Feb 23 13:07:24 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 23 13:07:24 crc restorecon[4697]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 13:07:24 crc restorecon[4697]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc 
restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc 
restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 
13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc 
restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 13:07:24 crc 
restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 
crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 
13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 13:07:24 crc 
restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc 
restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc 
restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 13:07:24 crc restorecon[4697]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 
crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc 
restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:24 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc 
restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc 
restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 13:07:25 crc restorecon[4697]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 13:07:25 crc restorecon[4697]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 23 13:07:25 crc kubenswrapper[4851]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 13:07:25 crc kubenswrapper[4851]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 23 13:07:25 crc kubenswrapper[4851]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 13:07:25 crc kubenswrapper[4851]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 23 13:07:25 crc kubenswrapper[4851]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 23 13:07:25 crc kubenswrapper[4851]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.751465 4851 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.756946 4851 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.756995 4851 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757001 4851 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757006 4851 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757011 4851 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757017 4851 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757022 4851 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757029 4851 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757035 4851 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757040 4851 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757045 4851 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757050 4851 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757054 4851 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757058 4851 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757069 4851 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757074 4851 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757078 4851 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757083 4851 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757087 4851 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757091 4851 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757095 4851 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757101 4851 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757106 4851 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757111 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757116 4851 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757122 4851 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757127 4851 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757131 4851 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757135 4851 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757139 4851 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757143 4851 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757148 4851 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757152 4851 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757156 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757160 4851 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757164 4851 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 
13:07:25.757168 4851 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757172 4851 feature_gate.go:330] unrecognized feature gate: Example Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757176 4851 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757181 4851 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757186 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757191 4851 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757195 4851 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757199 4851 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757203 4851 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757208 4851 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757216 4851 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757221 4851 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757225 4851 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757231 4851 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757235 4851 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDC Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757241 4851 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757246 4851 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757252 4851 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757256 4851 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757260 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757264 4851 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757268 4851 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757272 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757276 4851 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757280 4851 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757285 4851 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757289 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757293 4851 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757298 4851 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 
13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757302 4851 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757306 4851 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757310 4851 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757315 4851 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757320 4851 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.757324 4851 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757471 4851 flags.go:64] FLAG: --address="0.0.0.0" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757485 4851 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757500 4851 flags.go:64] FLAG: --anonymous-auth="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757514 4851 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757522 4851 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757528 4851 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757537 4851 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757547 4851 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757554 4851 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757560 
4851 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757566 4851 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757573 4851 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757579 4851 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757585 4851 flags.go:64] FLAG: --cgroup-root="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757590 4851 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757596 4851 flags.go:64] FLAG: --client-ca-file="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757600 4851 flags.go:64] FLAG: --cloud-config="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757606 4851 flags.go:64] FLAG: --cloud-provider="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757611 4851 flags.go:64] FLAG: --cluster-dns="[]" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757617 4851 flags.go:64] FLAG: --cluster-domain="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757623 4851 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757628 4851 flags.go:64] FLAG: --config-dir="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757633 4851 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757640 4851 flags.go:64] FLAG: --container-log-max-files="5" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757647 4851 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757652 4851 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 
13:07:25.757658 4851 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757663 4851 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757669 4851 flags.go:64] FLAG: --contention-profiling="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757675 4851 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757680 4851 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757686 4851 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757690 4851 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757697 4851 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757703 4851 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757708 4851 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757713 4851 flags.go:64] FLAG: --enable-load-reader="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757718 4851 flags.go:64] FLAG: --enable-server="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757723 4851 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757732 4851 flags.go:64] FLAG: --event-burst="100" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757738 4851 flags.go:64] FLAG: --event-qps="50" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757742 4851 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757748 4851 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 23 13:07:25 crc 
kubenswrapper[4851]: I0223 13:07:25.757754 4851 flags.go:64] FLAG: --eviction-hard="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757761 4851 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757767 4851 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757772 4851 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757777 4851 flags.go:64] FLAG: --eviction-soft="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757783 4851 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757789 4851 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757794 4851 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757799 4851 flags.go:64] FLAG: --experimental-mounter-path="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757805 4851 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757810 4851 flags.go:64] FLAG: --fail-swap-on="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757816 4851 flags.go:64] FLAG: --feature-gates="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757830 4851 flags.go:64] FLAG: --file-check-frequency="20s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757835 4851 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757841 4851 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757847 4851 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757852 4851 flags.go:64] FLAG: --healthz-port="10248" Feb 23 13:07:25 
crc kubenswrapper[4851]: I0223 13:07:25.757858 4851 flags.go:64] FLAG: --help="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757865 4851 flags.go:64] FLAG: --hostname-override="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757870 4851 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757875 4851 flags.go:64] FLAG: --http-check-frequency="20s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757880 4851 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757885 4851 flags.go:64] FLAG: --image-credential-provider-config="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757890 4851 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757895 4851 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757901 4851 flags.go:64] FLAG: --image-service-endpoint="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757907 4851 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757913 4851 flags.go:64] FLAG: --kube-api-burst="100" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757920 4851 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757927 4851 flags.go:64] FLAG: --kube-api-qps="50" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757933 4851 flags.go:64] FLAG: --kube-reserved="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757938 4851 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757943 4851 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757949 4851 flags.go:64] FLAG: --kubelet-cgroups="" Feb 23 13:07:25 crc 
kubenswrapper[4851]: I0223 13:07:25.757954 4851 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757959 4851 flags.go:64] FLAG: --lock-file="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757967 4851 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757974 4851 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757980 4851 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757990 4851 flags.go:64] FLAG: --log-json-split-stream="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.757995 4851 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758000 4851 flags.go:64] FLAG: --log-text-split-stream="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758006 4851 flags.go:64] FLAG: --logging-format="text" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758011 4851 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758017 4851 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758023 4851 flags.go:64] FLAG: --manifest-url="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758028 4851 flags.go:64] FLAG: --manifest-url-header="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758035 4851 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758041 4851 flags.go:64] FLAG: --max-open-files="1000000" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758049 4851 flags.go:64] FLAG: --max-pods="110" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758054 4851 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 23 13:07:25 crc 
kubenswrapper[4851]: I0223 13:07:25.758060 4851 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758065 4851 flags.go:64] FLAG: --memory-manager-policy="None" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758071 4851 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758078 4851 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758083 4851 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758088 4851 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758102 4851 flags.go:64] FLAG: --node-status-max-images="50" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758108 4851 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758114 4851 flags.go:64] FLAG: --oom-score-adj="-999" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758120 4851 flags.go:64] FLAG: --pod-cidr="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758126 4851 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758136 4851 flags.go:64] FLAG: --pod-manifest-path="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758142 4851 flags.go:64] FLAG: --pod-max-pids="-1" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758148 4851 flags.go:64] FLAG: --pods-per-core="0" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758155 4851 flags.go:64] FLAG: --port="10250" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758160 4851 flags.go:64] FLAG: 
--protect-kernel-defaults="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758166 4851 flags.go:64] FLAG: --provider-id="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758172 4851 flags.go:64] FLAG: --qos-reserved="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758178 4851 flags.go:64] FLAG: --read-only-port="10255" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758184 4851 flags.go:64] FLAG: --register-node="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758190 4851 flags.go:64] FLAG: --register-schedulable="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758196 4851 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758207 4851 flags.go:64] FLAG: --registry-burst="10" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758213 4851 flags.go:64] FLAG: --registry-qps="5" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758218 4851 flags.go:64] FLAG: --reserved-cpus="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758223 4851 flags.go:64] FLAG: --reserved-memory="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758230 4851 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758235 4851 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758241 4851 flags.go:64] FLAG: --rotate-certificates="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758245 4851 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758250 4851 flags.go:64] FLAG: --runonce="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758256 4851 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758261 4851 flags.go:64] FLAG: 
--runtime-request-timeout="2m0s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758267 4851 flags.go:64] FLAG: --seccomp-default="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758272 4851 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758278 4851 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758283 4851 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758289 4851 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758295 4851 flags.go:64] FLAG: --storage-driver-password="root" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758301 4851 flags.go:64] FLAG: --storage-driver-secure="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758305 4851 flags.go:64] FLAG: --storage-driver-table="stats" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758310 4851 flags.go:64] FLAG: --storage-driver-user="root" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758323 4851 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758348 4851 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758355 4851 flags.go:64] FLAG: --system-cgroups="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758360 4851 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758369 4851 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758375 4851 flags.go:64] FLAG: --tls-cert-file="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758380 4851 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 
13:07:25.758387 4851 flags.go:64] FLAG: --tls-min-version="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758393 4851 flags.go:64] FLAG: --tls-private-key-file="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758398 4851 flags.go:64] FLAG: --topology-manager-policy="none" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758404 4851 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758409 4851 flags.go:64] FLAG: --topology-manager-scope="container" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758415 4851 flags.go:64] FLAG: --v="2" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758422 4851 flags.go:64] FLAG: --version="false" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758430 4851 flags.go:64] FLAG: --vmodule="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758439 4851 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758444 4851 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758601 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758609 4851 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758614 4851 feature_gate.go:330] unrecognized feature gate: Example Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758619 4851 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758623 4851 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758630 4851 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758636 4851 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758640 4851 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758645 4851 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758649 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758654 4851 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758659 4851 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758663 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758668 4851 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758673 4851 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758681 4851 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758686 4851 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758690 4851 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758694 4851 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758698 4851 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 13:07:25 crc kubenswrapper[4851]: 
W0223 13:07:25.758702 4851 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758707 4851 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758711 4851 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758715 4851 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758719 4851 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758723 4851 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758728 4851 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758732 4851 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758736 4851 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758742 4851 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758747 4851 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758753 4851 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758757 4851 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758762 4851 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758767 4851 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758772 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758777 4851 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758783 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758788 4851 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758792 4851 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758796 4851 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758801 4851 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758805 4851 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758811 4851 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758815 4851 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758820 4851 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758824 4851 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758832 4851 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758836 4851 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758840 4851 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758844 4851 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758848 4851 feature_gate.go:330] unrecognized feature gate: 
NutanixMultiSubnets Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758853 4851 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758857 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758862 4851 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758866 4851 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758870 4851 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758874 4851 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758880 4851 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758885 4851 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758889 4851 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758893 4851 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758898 4851 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758902 4851 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758906 4851 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758911 4851 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758916 4851 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758920 4851 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758926 4851 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758931 4851 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.758938 4851 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.758953 4851 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.771460 4851 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.771508 4851 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771603 4851 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771611 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771616 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771634 4851 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771639 4851 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771643 4851 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771648 4851 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771652 4851 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771656 4851 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771660 4851 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771663 4851 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 13:07:25 crc 
kubenswrapper[4851]: W0223 13:07:25.771667 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771671 4851 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771675 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771679 4851 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771682 4851 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771686 4851 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771689 4851 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771694 4851 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771715 4851 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771720 4851 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771724 4851 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771728 4851 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771732 4851 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771736 4851 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771740 4851 feature_gate.go:330] unrecognized 
feature gate: AdditionalRoutingCapabilities Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771743 4851 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771747 4851 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771751 4851 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771756 4851 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771760 4851 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771764 4851 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771768 4851 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771772 4851 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771791 4851 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771794 4851 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771799 4851 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771802 4851 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771806 4851 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771809 4851 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 13:07:25 crc 
kubenswrapper[4851]: W0223 13:07:25.771813 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771817 4851 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771820 4851 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771824 4851 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771827 4851 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771831 4851 feature_gate.go:330] unrecognized feature gate: Example Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771834 4851 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771839 4851 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771845 4851 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771849 4851 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771868 4851 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771873 4851 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771877 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771882 4851 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771885 4851 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771889 4851 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771892 4851 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771896 4851 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771899 4851 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771903 4851 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771907 4851 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771914 4851 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771919 4851 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771923 4851 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771928 4851 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771946 4851 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771951 4851 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771956 4851 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771959 4851 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771963 4851 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.771967 4851 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.771975 4851 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 
13:07:25.772153 4851 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772176 4851 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772180 4851 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772184 4851 feature_gate.go:330] unrecognized feature gate: Example Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772188 4851 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772191 4851 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772195 4851 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772199 4851 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772203 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772206 4851 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772210 4851 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772214 4851 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772218 4851 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772221 4851 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772225 4851 feature_gate.go:330] unrecognized feature 
gate: GCPClusterHostedDNS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772230 4851 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772237 4851 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772255 4851 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772259 4851 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772262 4851 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772268 4851 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772273 4851 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772278 4851 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772283 4851 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772289 4851 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772294 4851 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772297 4851 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772301 4851 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772306 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772309 4851 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772337 4851 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772342 4851 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772347 4851 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772352 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772358 4851 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772362 4851 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772365 4851 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772369 4851 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772374 4851 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772377 4851 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772381 4851 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772384 4851 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772388 4851 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772408 4851 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772412 4851 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772416 4851 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772419 4851 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772423 4851 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772426 4851 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772430 4851 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772433 4851 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772437 4851 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772440 4851 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772444 4851 feature_gate.go:330] 
unrecognized feature gate: NewOLM Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772448 4851 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772451 4851 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772455 4851 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772458 4851 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772462 4851 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772465 4851 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772469 4851 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772486 4851 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772490 4851 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772493 4851 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772497 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772501 4851 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772504 4851 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772508 4851 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772511 4851 
feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772515 4851 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.772520 4851 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.772526 4851 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.773529 4851 server.go:940] "Client rotation is on, will bootstrap in background" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.777247 4851 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.777350 4851 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.779009 4851 server.go:997] "Starting client certificate rotation" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.779031 4851 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.779438 4851 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-16 11:03:12.013968008 +0000 UTC Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.779569 4851 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.801703 4851 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:07:25 crc kubenswrapper[4851]: E0223 13:07:25.805745 4851 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.806748 4851 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.822689 4851 log.go:25] "Validated CRI v1 runtime API" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.852987 4851 log.go:25] "Validated CRI v1 image API" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.858971 4851 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.864999 4851 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-23-13-02-25-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.865042 4851 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.881464 4851 manager.go:217] Machine: {Timestamp:2026-02-23 13:07:25.878456816 +0000 UTC m=+0.560160514 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:147f526a-cf20-4c21-b33c-eacf21a9553b BootID:8ce3a304-1be6-4250-ba8b-ce6e05e05ddb Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 
Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:00:fb:28 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:00:fb:28 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8e:6a:ab Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cb:30:f2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a4:4e:08 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3f:ba:42 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ae:c8:53:6d:3d:eb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9e:37:e8:d1:e2:63 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.881707 4851 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.881931 4851 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.883057 4851 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.883228 4851 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.883266 4851 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.883588 4851 topology_manager.go:138] "Creating topology manager with none policy" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.883600 4851 container_manager_linux.go:303] "Creating device plugin manager" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.884135 4851 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.884167 4851 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.884392 4851 state_mem.go:36] "Initialized new in-memory state store" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.884498 4851 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.888089 4851 kubelet.go:418] "Attempting to sync node with API server" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.888113 4851 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.888139 4851 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.888155 4851 kubelet.go:324] "Adding apiserver pod source" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.888168 4851 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.892098 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.892146 4851 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 23 13:07:25 crc kubenswrapper[4851]: E0223 13:07:25.892170 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.892829 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:25 crc kubenswrapper[4851]: E0223 13:07:25.892957 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.893264 4851 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.895093 4851 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896539 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896564 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896572 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896580 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896593 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896601 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896608 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896623 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896634 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896642 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896658 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.896665 4851 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.898983 4851 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.903079 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.903203 4851 server.go:1280] "Started kubelet" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.904261 4851 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.904301 4851 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 23 13:07:25 crc systemd[1]: Started Kubernetes Kubelet. Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.905487 4851 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.908558 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.908611 4851 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.908733 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:40:46.171053174 +0000 UTC Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.909377 4851 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.909427 4851 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.909532 4851 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 23 13:07:25 crc kubenswrapper[4851]: E0223 13:07:25.909594 4851 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.911149 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:25 crc kubenswrapper[4851]: E0223 13:07:25.911257 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:25 crc kubenswrapper[4851]: E0223 13:07:25.911166 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="200ms" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.915522 4851 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.915556 4851 factory.go:55] Registering systemd factory Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.915567 4851 factory.go:221] Registration of the systemd container factory successfully Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.915542 4851 server.go:460] "Adding debug handlers to kubelet server" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.916260 4851 factory.go:153] Registering CRI-O factory Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 
13:07:25.916285 4851 factory.go:221] Registration of the crio container factory successfully Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.916324 4851 factory.go:103] Registering Raw factory Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.916366 4851 manager.go:1196] Started watching for new ooms in manager Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.917017 4851 manager.go:319] Starting recovery of all containers Feb 23 13:07:25 crc kubenswrapper[4851]: E0223 13:07:25.915874 4851 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.5:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1896e20c4e9a8ddc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC m=+0.584809188,LastTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC m=+0.584809188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936174 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936262 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936300 4851 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936319 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936358 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936383 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936408 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936438 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936460 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936477 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936506 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936524 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936551 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936576 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936600 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936623 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936653 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936674 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936694 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936725 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936744 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936770 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936792 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936814 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936845 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936866 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936897 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936931 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936951 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936969 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.936992 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937017 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937043 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937063 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937082 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937107 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937131 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937154 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937174 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937196 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937219 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937239 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937258 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937280 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937299 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937341 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937360 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937378 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937400 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937419 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937442 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" 
seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937461 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937492 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937512 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937538 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937563 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937587 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937612 4851 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937634 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937658 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937675 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937694 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937716 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937733 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937761 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937782 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937801 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937822 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937846 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937868 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937895 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937918 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937945 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.937967 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940138 4851 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940182 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940210 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940226 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940240 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940257 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940269 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940287 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940311 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940349 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940370 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940383 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940399 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940420 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940432 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940447 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940461 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940476 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940490 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940564 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940583 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940599 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940611 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940625 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940643 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940657 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940673 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940685 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940702 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940720 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940735 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940760 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940781 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940800 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940828 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.940850 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941074 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941172 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941207 4851 manager.go:324] Recovery completed Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941212 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941255 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941358 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941405 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941435 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941490 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941527 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941558 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941587 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941611 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941658 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941683 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941716 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941744 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941768 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941798 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941822 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941860 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" 
seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941909 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941935 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941965 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.941990 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942017 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942066 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942092 4851 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942123 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942148 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942175 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942267 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942300 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942398 4851 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942444 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942470 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942504 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942527 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942557 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942584 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942839 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.942955 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943020 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943067 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943110 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943165 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943237 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943270 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943321 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943401 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943443 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943724 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943787 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943830 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943862 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943899 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.943923 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945473 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" 
seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945574 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945591 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945603 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945619 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945634 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945647 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: 
I0223 13:07:25.945662 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945676 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945690 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945702 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945716 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945727 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945740 4851 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945752 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945767 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945780 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945792 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945804 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945816 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945828 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945838 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945849 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945862 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945876 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945890 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945902 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945913 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945925 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945935 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945946 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945959 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945972 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945983 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.945995 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.946004 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.946015 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.946025 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.946035 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.946048 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.946058 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.946070 4851 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.946081 4851 reconstruct.go:97] "Volume reconstruction finished" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.946088 4851 reconciler.go:26] "Reconciler: start to sync state" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.958514 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.962519 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:25 
crc kubenswrapper[4851]: I0223 13:07:25.962560 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.962572 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.964749 4851 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.964781 4851 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.964807 4851 state_mem.go:36] "Initialized new in-memory state store" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.965840 4851 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.967363 4851 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.967417 4851 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.967458 4851 kubelet.go:2335] "Starting kubelet main sync loop" Feb 23 13:07:25 crc kubenswrapper[4851]: E0223 13:07:25.967511 4851 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 23 13:07:25 crc kubenswrapper[4851]: W0223 13:07:25.968600 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:25 crc kubenswrapper[4851]: E0223 13:07:25.968678 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.978979 4851 policy_none.go:49] "None policy: Start" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.980613 4851 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 23 13:07:25 crc kubenswrapper[4851]: I0223 13:07:25.980688 4851 state_mem.go:35] "Initializing new in-memory state store" Feb 23 13:07:26 crc kubenswrapper[4851]: E0223 13:07:26.009735 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.027360 4851 manager.go:334] "Starting Device Plugin manager" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.027423 4851 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.027439 4851 server.go:79] "Starting device plugin registration server" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.027995 4851 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.028014 4851 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.028218 4851 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.028377 4851 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.028395 4851 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 23 13:07:26 crc kubenswrapper[4851]: E0223 13:07:26.034599 4851 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.068005 4851 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.068129 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.069059 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.069111 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.069122 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.069299 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.069444 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.069487 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.070317 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.070365 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.070365 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.070399 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.070409 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.070379 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.070650 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.070745 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.070784 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.071552 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.071571 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.071579 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.071701 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.071756 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.071768 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.071888 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.071975 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.072024 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.072729 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.072759 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.072769 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.072775 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.072793 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.072805 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.072928 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.073024 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.073065 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.073700 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.073721 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.073730 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.073734 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.073750 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.073759 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.073871 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.073889 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.074762 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.074780 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.074790 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: E0223 13:07:26.113143 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="400ms" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.128576 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.129920 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.129952 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.129984 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.130009 4851 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 13:07:26 crc kubenswrapper[4851]: E0223 13:07:26.130648 4851 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147522 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147562 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147582 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147601 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147621 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147727 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147758 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147829 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147868 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147891 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147909 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147927 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147946 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147962 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.147985 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249652 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249740 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249777 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249798 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249819 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249843 4851 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249869 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249893 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249975 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249978 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250009 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 
23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250045 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250071 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250030 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250050 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249999 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.249982 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250081 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250054 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250188 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250010 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250272 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250351 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250351 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250387 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250411 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250454 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250476 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250480 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.250585 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.331134 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.332557 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.332604 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.332622 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.332652 4851 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 13:07:26 crc kubenswrapper[4851]: E0223 13:07:26.333296 4851 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: 
connect: connection refused" node="crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.398892 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.404672 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.430060 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: W0223 13:07:26.439499 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-8c2e42f4b517dd0c9525cff026eb172930924ebfa4681e0801b537f58bf2d8ae WatchSource:0}: Error finding container 8c2e42f4b517dd0c9525cff026eb172930924ebfa4681e0801b537f58bf2d8ae: Status 404 returned error can't find the container with id 8c2e42f4b517dd0c9525cff026eb172930924ebfa4681e0801b537f58bf2d8ae Feb 23 13:07:26 crc kubenswrapper[4851]: W0223 13:07:26.442550 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4d8e5fade8dcba65c133e2ad86c55eeb7c53ac9c986eefdc93a12c75e488c1cf WatchSource:0}: Error finding container 4d8e5fade8dcba65c133e2ad86c55eeb7c53ac9c986eefdc93a12c75e488c1cf: Status 404 returned error can't find the container with id 4d8e5fade8dcba65c133e2ad86c55eeb7c53ac9c986eefdc93a12c75e488c1cf Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.445575 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: W0223 13:07:26.450977 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-41c177643683a2a55ecd1150b1ba4f4ac1a47f68935bf7609a6d31362c869ebc WatchSource:0}: Error finding container 41c177643683a2a55ecd1150b1ba4f4ac1a47f68935bf7609a6d31362c869ebc: Status 404 returned error can't find the container with id 41c177643683a2a55ecd1150b1ba4f4ac1a47f68935bf7609a6d31362c869ebc Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.452542 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 13:07:26 crc kubenswrapper[4851]: W0223 13:07:26.463282 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ac290060e936a1ac64d2c4206b36762dc70535cf196810f343697a0d62d5ab97 WatchSource:0}: Error finding container ac290060e936a1ac64d2c4206b36762dc70535cf196810f343697a0d62d5ab97: Status 404 returned error can't find the container with id ac290060e936a1ac64d2c4206b36762dc70535cf196810f343697a0d62d5ab97 Feb 23 13:07:26 crc kubenswrapper[4851]: E0223 13:07:26.467157 4851 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.5:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1896e20c4e9a8ddc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC 
m=+0.584809188,LastTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC m=+0.584809188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 13:07:26 crc kubenswrapper[4851]: W0223 13:07:26.481724 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-b84b58ca0d1195bf264002c0de00031bf04d770239c37e9edfce0b4596185e29 WatchSource:0}: Error finding container b84b58ca0d1195bf264002c0de00031bf04d770239c37e9edfce0b4596185e29: Status 404 returned error can't find the container with id b84b58ca0d1195bf264002c0de00031bf04d770239c37e9edfce0b4596185e29 Feb 23 13:07:26 crc kubenswrapper[4851]: E0223 13:07:26.514563 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="800ms" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.733785 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.735713 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.735762 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.735772 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.735798 4851 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 13:07:26 crc kubenswrapper[4851]: E0223 13:07:26.736389 4851 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Feb 23 13:07:26 crc kubenswrapper[4851]: W0223 13:07:26.773913 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:26 crc kubenswrapper[4851]: E0223 13:07:26.774007 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.905007 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:26 crc kubenswrapper[4851]: W0223 13:07:26.908278 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:26 crc kubenswrapper[4851]: E0223 13:07:26.908382 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:26 crc 
kubenswrapper[4851]: I0223 13:07:26.909264 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 08:20:15.517702916 +0000 UTC Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.973293 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b84b58ca0d1195bf264002c0de00031bf04d770239c37e9edfce0b4596185e29"} Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.974196 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ac290060e936a1ac64d2c4206b36762dc70535cf196810f343697a0d62d5ab97"} Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.975557 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"41c177643683a2a55ecd1150b1ba4f4ac1a47f68935bf7609a6d31362c869ebc"} Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.976370 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4d8e5fade8dcba65c133e2ad86c55eeb7c53ac9c986eefdc93a12c75e488c1cf"} Feb 23 13:07:26 crc kubenswrapper[4851]: I0223 13:07:26.977128 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8c2e42f4b517dd0c9525cff026eb172930924ebfa4681e0801b537f58bf2d8ae"} Feb 23 13:07:27 crc kubenswrapper[4851]: W0223 13:07:27.084396 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:27 crc kubenswrapper[4851]: E0223 13:07:27.084563 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:27 crc kubenswrapper[4851]: W0223 13:07:27.273571 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:27 crc kubenswrapper[4851]: E0223 13:07:27.273644 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:27 crc kubenswrapper[4851]: E0223 13:07:27.316112 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="1.6s" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.537288 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.538728 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.538763 
4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.538772 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.538809 4851 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 13:07:27 crc kubenswrapper[4851]: E0223 13:07:27.539371 4851 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.904770 4851 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.904891 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.909361 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:03:27.802060759 +0000 UTC Feb 23 13:07:27 crc kubenswrapper[4851]: E0223 13:07:27.909539 4851 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.985291 4851 generic.go:334] "Generic (PLEG): container finished" 
podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="dd474662f340758a0ec49a33ed5d4b78b595a3eed41c679d6bfd6965d123e224" exitCode=0 Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.985411 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"dd474662f340758a0ec49a33ed5d4b78b595a3eed41c679d6bfd6965d123e224"} Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.985476 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.987066 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.987111 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.987122 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.988294 4851 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0" exitCode=0 Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.988387 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0"} Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.988515 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.989842 4851 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.989866 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.989879 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.994602 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.994547 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b066313052e450595bc08b10fa6316bfd3dd51d9c531f6c771a3a5ac138d785b"} Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.994688 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d0e5509ea096a7c151dfda3739dbbbe80101948468b414e6651996230399b3b9"} Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.994721 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"afedae41697753018a8ebc39fb53c0a05f4cca642bcaa37f974d458702156cf8"} Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.994749 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7eae57f25a2c5c6185b9efefc8b3729b06b64180892ecced033765aaebf9b5fa"} Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.996111 4851 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.996184 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.996218 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.997001 4851 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2" exitCode=0 Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.997061 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2"} Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.997206 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.998418 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.998468 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.998489 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.999229 4851 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083" exitCode=0 Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.999278 4851 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083"} Feb 23 13:07:27 crc kubenswrapper[4851]: I0223 13:07:27.999364 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:28 crc kubenswrapper[4851]: I0223 13:07:28.000088 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:28 crc kubenswrapper[4851]: I0223 13:07:28.000122 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:28 crc kubenswrapper[4851]: I0223 13:07:28.000133 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:28 crc kubenswrapper[4851]: I0223 13:07:28.004044 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:28 crc kubenswrapper[4851]: I0223 13:07:28.005348 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:28 crc kubenswrapper[4851]: I0223 13:07:28.005373 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:28 crc kubenswrapper[4851]: I0223 13:07:28.005384 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:28 crc kubenswrapper[4851]: I0223 13:07:28.904710 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:28 crc kubenswrapper[4851]: I0223 13:07:28.909819 4851 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:52:44.829255769 +0000 UTC Feb 23 13:07:28 crc kubenswrapper[4851]: E0223 13:07:28.917411 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="3.2s" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.006044 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871"} Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.006130 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78"} Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.006146 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7"} Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.008111 4851 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f" exitCode=0 Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.008206 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f"} Feb 23 13:07:29 crc 
kubenswrapper[4851]: I0223 13:07:29.008273 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.009809 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.009860 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.009872 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.010949 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"717784a4b23f7582925c2009757521d85edfb37b0f177714656211cc909eec2e"} Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.011041 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.012027 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.012053 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.012062 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.015455 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.015460 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"68e437301c1ee7791c731beb8f4213ee36b7eb4e71d9141b1e618c740f554202"} Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.015511 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.015513 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"efba4302294164dc5afd345375abda413072c51da085246a3a27c4e727a2c0b7"} Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.015629 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"73d34ef95174e2912058b7d4a786eb1dcbee4f58993800508fde8ff98e692869"} Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.016297 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.016343 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.016390 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.016430 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.016446 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.016455 4851 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.046314 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.139940 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.141627 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.141664 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.141675 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.141700 4851 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 13:07:29 crc kubenswrapper[4851]: E0223 13:07:29.142166 4851 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Feb 23 13:07:29 crc kubenswrapper[4851]: W0223 13:07:29.555882 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:29 crc kubenswrapper[4851]: E0223 13:07:29.556044 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 
38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:29 crc kubenswrapper[4851]: W0223 13:07:29.603347 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:29 crc kubenswrapper[4851]: E0223 13:07:29.603459 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.696068 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:29 crc kubenswrapper[4851]: W0223 13:07:29.703313 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:29 crc kubenswrapper[4851]: E0223 13:07:29.703487 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.720350 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:29 crc kubenswrapper[4851]: 
I0223 13:07:29.904869 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:29 crc kubenswrapper[4851]: I0223 13:07:29.909980 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:53:38.931645495 +0000 UTC Feb 23 13:07:29 crc kubenswrapper[4851]: W0223 13:07:29.971188 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:07:29 crc kubenswrapper[4851]: E0223 13:07:29.971367 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.022251 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf"} Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.022306 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"074a0f9a3c6173dc20b7a23899c31eb6d9b9802ba9a59eb013c29808940f57b9"} Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.022475 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.023662 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.023720 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.023734 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.029294 4851 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87" exitCode=0 Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.029657 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.029701 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.029721 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.029657 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.029369 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87"} Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.030786 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.031687 4851 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.031721 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.031734 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.032593 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.032620 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.032630 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.033076 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.033100 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.033110 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.033635 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.033661 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.033672 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.537693 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:30 crc kubenswrapper[4851]: I0223 13:07:30.910136 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:41:05.929991764 +0000 UTC Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.037831 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.038310 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569"} Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.038401 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.038420 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073"} Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.038459 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.038471 4851 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.038487 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc"} Feb 23 
13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.038528 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7"} Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.038566 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.038749 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.038798 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.038814 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.039518 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.039564 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.039578 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.039948 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.040004 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.040023 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 23 13:07:31 crc kubenswrapper[4851]: I0223 13:07:31.910813 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:29:18.216612997 +0000 UTC Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.046398 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696"} Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.046433 4851 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.046461 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.046477 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.046558 4851 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.046607 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.046682 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.047653 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.047698 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.047709 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.047760 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.047803 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.047824 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.048627 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.048660 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.048673 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.274994 4851 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.343023 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 
13:07:32.344899 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.344952 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.344968 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.345016 4851 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 13:07:32 crc kubenswrapper[4851]: I0223 13:07:32.911596 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:01:29.266754257 +0000 UTC Feb 23 13:07:33 crc kubenswrapper[4851]: I0223 13:07:33.048898 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:33 crc kubenswrapper[4851]: I0223 13:07:33.050143 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:33 crc kubenswrapper[4851]: I0223 13:07:33.050217 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:33 crc kubenswrapper[4851]: I0223 13:07:33.050239 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:33 crc kubenswrapper[4851]: I0223 13:07:33.625823 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:33 crc kubenswrapper[4851]: I0223 13:07:33.626053 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:33 crc kubenswrapper[4851]: I0223 13:07:33.627646 4851 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:33 crc kubenswrapper[4851]: I0223 13:07:33.627738 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:33 crc kubenswrapper[4851]: I0223 13:07:33.627765 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:33 crc kubenswrapper[4851]: I0223 13:07:33.912520 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:36:35.360591365 +0000 UTC Feb 23 13:07:34 crc kubenswrapper[4851]: I0223 13:07:34.039577 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:34 crc kubenswrapper[4851]: I0223 13:07:34.039932 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:34 crc kubenswrapper[4851]: I0223 13:07:34.042138 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:34 crc kubenswrapper[4851]: I0223 13:07:34.042196 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:34 crc kubenswrapper[4851]: I0223 13:07:34.042208 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:34 crc kubenswrapper[4851]: I0223 13:07:34.554455 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:34 crc kubenswrapper[4851]: I0223 13:07:34.554688 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:34 crc kubenswrapper[4851]: I0223 13:07:34.556054 4851 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:34 crc kubenswrapper[4851]: I0223 13:07:34.556095 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:34 crc kubenswrapper[4851]: I0223 13:07:34.556106 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:34 crc kubenswrapper[4851]: I0223 13:07:34.912769 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:49:28.724362511 +0000 UTC Feb 23 13:07:35 crc kubenswrapper[4851]: I0223 13:07:35.913859 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:06:30.425437583 +0000 UTC Feb 23 13:07:36 crc kubenswrapper[4851]: E0223 13:07:36.034777 4851 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 13:07:36 crc kubenswrapper[4851]: I0223 13:07:36.125465 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 23 13:07:36 crc kubenswrapper[4851]: I0223 13:07:36.125659 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:36 crc kubenswrapper[4851]: I0223 13:07:36.127976 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:36 crc kubenswrapper[4851]: I0223 13:07:36.128086 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:36 crc kubenswrapper[4851]: I0223 13:07:36.128130 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:36 crc kubenswrapper[4851]: I0223 13:07:36.914731 4851 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:49:05.541678899 +0000 UTC Feb 23 13:07:37 crc kubenswrapper[4851]: I0223 13:07:37.915228 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:30:27.225039454 +0000 UTC Feb 23 13:07:38 crc kubenswrapper[4851]: I0223 13:07:38.136048 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 23 13:07:38 crc kubenswrapper[4851]: I0223 13:07:38.136292 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:38 crc kubenswrapper[4851]: I0223 13:07:38.137743 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:38 crc kubenswrapper[4851]: I0223 13:07:38.137772 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:38 crc kubenswrapper[4851]: I0223 13:07:38.137782 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:38 crc kubenswrapper[4851]: I0223 13:07:38.915493 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:08:06.511076667 +0000 UTC Feb 23 13:07:39 crc kubenswrapper[4851]: I0223 13:07:39.915640 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:51:30.245271069 +0000 UTC Feb 23 13:07:40 crc kubenswrapper[4851]: I0223 13:07:40.600917 4851 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure 
output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48876->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 23 13:07:40 crc kubenswrapper[4851]: I0223 13:07:40.600989 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48876->192.168.126.11:17697: read: connection reset by peer" Feb 23 13:07:40 crc kubenswrapper[4851]: E0223 13:07:40.830654 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 23 13:07:40 crc kubenswrapper[4851]: W0223 13:07:40.830758 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z Feb 23 13:07:40 crc kubenswrapper[4851]: E0223 13:07:40.830839 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:07:40 crc kubenswrapper[4851]: E0223 13:07:40.832114 
4851 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:07:40 crc kubenswrapper[4851]: I0223 13:07:40.833173 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z Feb 23 13:07:40 crc kubenswrapper[4851]: E0223 13:07:40.833462 4851 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 13:07:40 crc kubenswrapper[4851]: W0223 13:07:40.834631 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z Feb 23 13:07:40 crc kubenswrapper[4851]: E0223 13:07:40.834698 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:07:40 crc kubenswrapper[4851]: W0223 13:07:40.834734 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z Feb 23 13:07:40 crc kubenswrapper[4851]: E0223 13:07:40.834783 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:07:40 crc kubenswrapper[4851]: W0223 13:07:40.836307 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z Feb 23 13:07:40 crc kubenswrapper[4851]: E0223 13:07:40.836388 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:07:40 
crc kubenswrapper[4851]: E0223 13:07:40.837171 4851 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896e20c4e9a8ddc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC m=+0.584809188,LastTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC m=+0.584809188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 13:07:40 crc kubenswrapper[4851]: I0223 13:07:40.839240 4851 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 23 13:07:40 crc kubenswrapper[4851]: I0223 13:07:40.839300 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 23 13:07:40 crc kubenswrapper[4851]: I0223 13:07:40.843551 4851 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 23 13:07:40 crc kubenswrapper[4851]: I0223 13:07:40.843625 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 23 13:07:40 crc kubenswrapper[4851]: I0223 13:07:40.907724 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:40Z is after 2026-02-23T05:33:13Z Feb 23 13:07:40 crc kubenswrapper[4851]: I0223 13:07:40.916099 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:21:46.632435108 +0000 UTC Feb 23 13:07:41 crc kubenswrapper[4851]: I0223 13:07:41.074147 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 13:07:41 crc kubenswrapper[4851]: I0223 13:07:41.076230 4851 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="074a0f9a3c6173dc20b7a23899c31eb6d9b9802ba9a59eb013c29808940f57b9" exitCode=255 Feb 23 13:07:41 crc kubenswrapper[4851]: I0223 13:07:41.076290 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"074a0f9a3c6173dc20b7a23899c31eb6d9b9802ba9a59eb013c29808940f57b9"} Feb 23 13:07:41 crc 
kubenswrapper[4851]: I0223 13:07:41.076507 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:41 crc kubenswrapper[4851]: I0223 13:07:41.077397 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:41 crc kubenswrapper[4851]: I0223 13:07:41.077431 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:41 crc kubenswrapper[4851]: I0223 13:07:41.077443 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:41 crc kubenswrapper[4851]: I0223 13:07:41.078018 4851 scope.go:117] "RemoveContainer" containerID="074a0f9a3c6173dc20b7a23899c31eb6d9b9802ba9a59eb013c29808940f57b9" Feb 23 13:07:41 crc kubenswrapper[4851]: I0223 13:07:41.907901 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:41Z is after 2026-02-23T05:33:13Z Feb 23 13:07:41 crc kubenswrapper[4851]: I0223 13:07:41.916312 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:20:07.717986806 +0000 UTC Feb 23 13:07:42 crc kubenswrapper[4851]: I0223 13:07:42.047830 4851 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:07:42 crc kubenswrapper[4851]: I0223 13:07:42.048213 4851 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:07:42 crc kubenswrapper[4851]: I0223 13:07:42.085177 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 13:07:42 crc kubenswrapper[4851]: I0223 13:07:42.087707 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"31292e8a3f233e0ea5aabb78edc30ba9d8f70a97d147ee8bf5889e642cd561a8"} Feb 23 13:07:42 crc kubenswrapper[4851]: I0223 13:07:42.087859 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:42 crc kubenswrapper[4851]: I0223 13:07:42.088885 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:42 crc kubenswrapper[4851]: I0223 13:07:42.088919 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:42 crc kubenswrapper[4851]: I0223 13:07:42.088931 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:42 crc kubenswrapper[4851]: I0223 13:07:42.906637 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:42Z is after 2026-02-23T05:33:13Z Feb 23 
13:07:42 crc kubenswrapper[4851]: I0223 13:07:42.917019 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:40:18.453481475 +0000 UTC Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.091620 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.092158 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.093769 4851 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31292e8a3f233e0ea5aabb78edc30ba9d8f70a97d147ee8bf5889e642cd561a8" exitCode=255 Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.093814 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"31292e8a3f233e0ea5aabb78edc30ba9d8f70a97d147ee8bf5889e642cd561a8"} Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.093879 4851 scope.go:117] "RemoveContainer" containerID="074a0f9a3c6173dc20b7a23899c31eb6d9b9802ba9a59eb013c29808940f57b9" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.094010 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.094893 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.094930 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:43 crc 
kubenswrapper[4851]: I0223 13:07:43.094943 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.095692 4851 scope.go:117] "RemoveContainer" containerID="31292e8a3f233e0ea5aabb78edc30ba9d8f70a97d147ee8bf5889e642cd561a8" Feb 23 13:07:43 crc kubenswrapper[4851]: E0223 13:07:43.095881 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.630233 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.630569 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.631624 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.631662 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.631673 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.909853 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-02-23T13:07:43Z is after 2026-02-23T05:33:13Z Feb 23 13:07:43 crc kubenswrapper[4851]: I0223 13:07:43.917112 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:25:29.936920546 +0000 UTC Feb 23 13:07:44 crc kubenswrapper[4851]: I0223 13:07:44.045716 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:44 crc kubenswrapper[4851]: I0223 13:07:44.098190 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 13:07:44 crc kubenswrapper[4851]: I0223 13:07:44.100857 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:44 crc kubenswrapper[4851]: I0223 13:07:44.101634 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:44 crc kubenswrapper[4851]: I0223 13:07:44.101666 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:44 crc kubenswrapper[4851]: I0223 13:07:44.101674 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:44 crc kubenswrapper[4851]: I0223 13:07:44.102227 4851 scope.go:117] "RemoveContainer" containerID="31292e8a3f233e0ea5aabb78edc30ba9d8f70a97d147ee8bf5889e642cd561a8" Feb 23 13:07:44 crc kubenswrapper[4851]: E0223 13:07:44.102430 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:07:44 crc kubenswrapper[4851]: I0223 13:07:44.109694 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:44 crc kubenswrapper[4851]: I0223 13:07:44.909361 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:44Z is after 2026-02-23T05:33:13Z Feb 23 13:07:44 crc kubenswrapper[4851]: I0223 13:07:44.918077 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:57:49.189421354 +0000 UTC Feb 23 13:07:45 crc kubenswrapper[4851]: I0223 13:07:45.103270 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:45 crc kubenswrapper[4851]: I0223 13:07:45.104260 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:45 crc kubenswrapper[4851]: I0223 13:07:45.104311 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:45 crc kubenswrapper[4851]: I0223 13:07:45.104364 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:45 crc kubenswrapper[4851]: I0223 13:07:45.105173 4851 scope.go:117] "RemoveContainer" containerID="31292e8a3f233e0ea5aabb78edc30ba9d8f70a97d147ee8bf5889e642cd561a8" Feb 23 13:07:45 crc kubenswrapper[4851]: E0223 13:07:45.105499 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:07:45 crc kubenswrapper[4851]: I0223 13:07:45.907215 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:45Z is after 2026-02-23T05:33:13Z Feb 23 13:07:45 crc kubenswrapper[4851]: I0223 13:07:45.918604 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 11:06:01.81513654 +0000 UTC Feb 23 13:07:46 crc kubenswrapper[4851]: E0223 13:07:46.034880 4851 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 13:07:46 crc kubenswrapper[4851]: I0223 13:07:46.907847 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:46Z is after 2026-02-23T05:33:13Z Feb 23 13:07:46 crc kubenswrapper[4851]: I0223 13:07:46.919291 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:00:32.304136046 +0000 UTC Feb 23 13:07:47 crc kubenswrapper[4851]: I0223 13:07:47.234493 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:47 crc kubenswrapper[4851]: I0223 13:07:47.235894 4851 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:47 crc kubenswrapper[4851]: I0223 13:07:47.235928 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:47 crc kubenswrapper[4851]: I0223 13:07:47.235936 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:47 crc kubenswrapper[4851]: E0223 13:07:47.235915 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:47Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 13:07:47 crc kubenswrapper[4851]: I0223 13:07:47.235954 4851 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 13:07:47 crc kubenswrapper[4851]: E0223 13:07:47.240688 4851 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:47Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 13:07:47 crc kubenswrapper[4851]: I0223 13:07:47.907804 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:47Z is after 2026-02-23T05:33:13Z Feb 23 13:07:47 crc kubenswrapper[4851]: I0223 13:07:47.920415 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:39:59.541913874 +0000 UTC Feb 23 13:07:47 crc 
kubenswrapper[4851]: W0223 13:07:47.943230 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:47Z is after 2026-02-23T05:33:13Z Feb 23 13:07:47 crc kubenswrapper[4851]: E0223 13:07:47.943290 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:07:48 crc kubenswrapper[4851]: I0223 13:07:48.178751 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 23 13:07:48 crc kubenswrapper[4851]: I0223 13:07:48.179106 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:48 crc kubenswrapper[4851]: I0223 13:07:48.180755 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:48 crc kubenswrapper[4851]: I0223 13:07:48.180801 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:48 crc kubenswrapper[4851]: I0223 13:07:48.180831 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:48 crc kubenswrapper[4851]: I0223 13:07:48.196897 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 23 13:07:48 crc kubenswrapper[4851]: I0223 13:07:48.907283 4851 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:48Z is after 2026-02-23T05:33:13Z Feb 23 13:07:48 crc kubenswrapper[4851]: I0223 13:07:48.920812 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:56:26.989478814 +0000 UTC Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.091589 4851 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 13:07:49 crc kubenswrapper[4851]: E0223 13:07:49.094744 4851 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:07:49 crc kubenswrapper[4851]: W0223 13:07:49.097351 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:49Z is after 2026-02-23T05:33:13Z Feb 23 13:07:49 crc kubenswrapper[4851]: E0223 13:07:49.097422 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.111881 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.113242 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.113275 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.113283 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.567599 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.567771 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.568825 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.568861 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.568874 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.569442 4851 scope.go:117] "RemoveContainer" containerID="31292e8a3f233e0ea5aabb78edc30ba9d8f70a97d147ee8bf5889e642cd561a8" 
Feb 23 13:07:49 crc kubenswrapper[4851]: E0223 13:07:49.569641 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:07:49 crc kubenswrapper[4851]: W0223 13:07:49.617211 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:49Z is after 2026-02-23T05:33:13Z Feb 23 13:07:49 crc kubenswrapper[4851]: E0223 13:07:49.617364 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.906940 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:49Z is after 2026-02-23T05:33:13Z Feb 23 13:07:49 crc kubenswrapper[4851]: I0223 13:07:49.920975 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 
23:46:31.351050892 +0000 UTC Feb 23 13:07:50 crc kubenswrapper[4851]: W0223 13:07:50.755093 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:50Z is after 2026-02-23T05:33:13Z Feb 23 13:07:50 crc kubenswrapper[4851]: E0223 13:07:50.755202 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:07:50 crc kubenswrapper[4851]: E0223 13:07:50.841496 4851 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:50Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896e20c4e9a8ddc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC m=+0.584809188,LastTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC m=+0.584809188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 13:07:50 crc kubenswrapper[4851]: I0223 13:07:50.909291 4851 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:50Z is after 2026-02-23T05:33:13Z Feb 23 13:07:50 crc kubenswrapper[4851]: I0223 13:07:50.921902 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:21:03.810893593 +0000 UTC Feb 23 13:07:51 crc kubenswrapper[4851]: I0223 13:07:51.905875 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:51Z is after 2026-02-23T05:33:13Z Feb 23 13:07:51 crc kubenswrapper[4851]: I0223 13:07:51.922365 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 09:51:00.251215624 +0000 UTC Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.048106 4851 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.048173 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.048230 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.048412 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.049426 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.049466 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.049478 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.050086 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"afedae41697753018a8ebc39fb53c0a05f4cca642bcaa37f974d458702156cf8"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.050278 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://afedae41697753018a8ebc39fb53c0a05f4cca642bcaa37f974d458702156cf8" gracePeriod=30 Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.584127 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" 
Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.584383 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.585773 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.585843 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.585857 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.586609 4851 scope.go:117] "RemoveContainer" containerID="31292e8a3f233e0ea5aabb78edc30ba9d8f70a97d147ee8bf5889e642cd561a8" Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.906489 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:52Z is after 2026-02-23T05:33:13Z Feb 23 13:07:52 crc kubenswrapper[4851]: I0223 13:07:52.923075 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:37:28.611201802 +0000 UTC Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.122855 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.123181 4851 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="afedae41697753018a8ebc39fb53c0a05f4cca642bcaa37f974d458702156cf8" 
exitCode=255 Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.123253 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"afedae41697753018a8ebc39fb53c0a05f4cca642bcaa37f974d458702156cf8"} Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.123288 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bca4a5c072768706140bbe965a5b2fdcfaf1e4b06f2f9043f1a08efe2717fe0b"} Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.123390 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.124220 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.124255 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.124267 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.125960 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.127274 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f2d739d71b9a0278fc7699ce6bc7a08c4d28c01479a43689cee6e48144a0572e"} Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.127402 4851 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.128109 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.128144 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.128154 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.906864 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:53Z is after 2026-02-23T05:33:13Z Feb 23 13:07:53 crc kubenswrapper[4851]: I0223 13:07:53.923319 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:33:07.693042046 +0000 UTC Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.130958 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.131571 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.132925 4851 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f2d739d71b9a0278fc7699ce6bc7a08c4d28c01479a43689cee6e48144a0572e" exitCode=255 Feb 23 
13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.132964 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f2d739d71b9a0278fc7699ce6bc7a08c4d28c01479a43689cee6e48144a0572e"} Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.133008 4851 scope.go:117] "RemoveContainer" containerID="31292e8a3f233e0ea5aabb78edc30ba9d8f70a97d147ee8bf5889e642cd561a8" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.133123 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.133969 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.134006 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.134016 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.134573 4851 scope.go:117] "RemoveContainer" containerID="f2d739d71b9a0278fc7699ce6bc7a08c4d28c01479a43689cee6e48144a0572e" Feb 23 13:07:54 crc kubenswrapper[4851]: E0223 13:07:54.134754 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:07:54 crc kubenswrapper[4851]: E0223 13:07:54.239974 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:54Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.241073 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.242713 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.242773 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.242789 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.242820 4851 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 13:07:54 crc kubenswrapper[4851]: E0223 13:07:54.245574 4851 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:54Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.908997 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:54Z is after 2026-02-23T05:33:13Z Feb 23 13:07:54 crc kubenswrapper[4851]: I0223 13:07:54.924373 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate 
expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:36:01.787007118 +0000 UTC Feb 23 13:07:55 crc kubenswrapper[4851]: I0223 13:07:55.139738 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 13:07:55 crc kubenswrapper[4851]: I0223 13:07:55.909440 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:55Z is after 2026-02-23T05:33:13Z Feb 23 13:07:55 crc kubenswrapper[4851]: I0223 13:07:55.925269 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:33:58.032515432 +0000 UTC Feb 23 13:07:56 crc kubenswrapper[4851]: E0223 13:07:56.034985 4851 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 13:07:56 crc kubenswrapper[4851]: I0223 13:07:56.907152 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:56Z is after 2026-02-23T05:33:13Z Feb 23 13:07:56 crc kubenswrapper[4851]: I0223 13:07:56.926042 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 04:34:49.676716766 +0000 UTC Feb 23 13:07:57 crc kubenswrapper[4851]: I0223 13:07:57.906860 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:57Z is after 2026-02-23T05:33:13Z Feb 23 13:07:57 crc kubenswrapper[4851]: I0223 13:07:57.926409 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:41:23.019543228 +0000 UTC Feb 23 13:07:58 crc kubenswrapper[4851]: I0223 13:07:58.906649 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:58Z is after 2026-02-23T05:33:13Z Feb 23 13:07:58 crc kubenswrapper[4851]: I0223 13:07:58.927137 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:25:47.207219402 +0000 UTC Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.046386 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.046621 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.047845 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.047892 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.047903 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.566823 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.567009 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.568228 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.568258 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.568267 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.568758 4851 scope.go:117] "RemoveContainer" containerID="f2d739d71b9a0278fc7699ce6bc7a08c4d28c01479a43689cee6e48144a0572e" Feb 23 13:07:59 crc kubenswrapper[4851]: E0223 13:07:59.568910 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.906837 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:59Z is after 2026-02-23T05:33:13Z Feb 23 13:07:59 crc kubenswrapper[4851]: I0223 13:07:59.928017 
4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:02:59.004313112 +0000 UTC Feb 23 13:08:00 crc kubenswrapper[4851]: I0223 13:08:00.538162 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:08:00 crc kubenswrapper[4851]: I0223 13:08:00.538319 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:00 crc kubenswrapper[4851]: I0223 13:08:00.539796 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:00 crc kubenswrapper[4851]: I0223 13:08:00.539843 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:00 crc kubenswrapper[4851]: I0223 13:08:00.539853 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:00 crc kubenswrapper[4851]: E0223 13:08:00.844510 4851 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:00Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896e20c4e9a8ddc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC m=+0.584809188,LastTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC m=+0.584809188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 13:08:00 crc kubenswrapper[4851]: I0223 13:08:00.906907 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:00Z is after 2026-02-23T05:33:13Z Feb 23 13:08:00 crc kubenswrapper[4851]: I0223 13:08:00.928402 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:58:16.610733012 +0000 UTC Feb 23 13:08:01 crc kubenswrapper[4851]: E0223 13:08:01.243482 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:01Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 13:08:01 crc kubenswrapper[4851]: I0223 13:08:01.246583 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:01 crc kubenswrapper[4851]: I0223 13:08:01.248103 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:01 crc kubenswrapper[4851]: I0223 13:08:01.248127 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:01 crc kubenswrapper[4851]: I0223 13:08:01.248136 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:01 crc kubenswrapper[4851]: I0223 13:08:01.248154 4851 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 13:08:01 crc 
kubenswrapper[4851]: E0223 13:08:01.250438 4851 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:01Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 13:08:01 crc kubenswrapper[4851]: I0223 13:08:01.906877 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:01Z is after 2026-02-23T05:33:13Z Feb 23 13:08:01 crc kubenswrapper[4851]: I0223 13:08:01.929382 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 15:59:32.921912198 +0000 UTC Feb 23 13:08:02 crc kubenswrapper[4851]: I0223 13:08:02.047007 4851 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:08:02 crc kubenswrapper[4851]: I0223 13:08:02.047100 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:08:02 crc kubenswrapper[4851]: I0223 13:08:02.584104 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:08:02 crc kubenswrapper[4851]: I0223 13:08:02.584395 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:02 crc kubenswrapper[4851]: I0223 13:08:02.585775 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:02 crc kubenswrapper[4851]: I0223 13:08:02.585825 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:02 crc kubenswrapper[4851]: I0223 13:08:02.585835 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:02 crc kubenswrapper[4851]: I0223 13:08:02.586441 4851 scope.go:117] "RemoveContainer" containerID="f2d739d71b9a0278fc7699ce6bc7a08c4d28c01479a43689cee6e48144a0572e" Feb 23 13:08:02 crc kubenswrapper[4851]: E0223 13:08:02.586631 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:08:02 crc kubenswrapper[4851]: I0223 13:08:02.908134 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:02Z is after 2026-02-23T05:33:13Z Feb 23 13:08:02 crc kubenswrapper[4851]: I0223 13:08:02.929676 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 
21:45:20.493654572 +0000 UTC Feb 23 13:08:03 crc kubenswrapper[4851]: I0223 13:08:03.907607 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:03Z is after 2026-02-23T05:33:13Z Feb 23 13:08:03 crc kubenswrapper[4851]: I0223 13:08:03.930260 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 09:20:36.812506403 +0000 UTC Feb 23 13:08:04 crc kubenswrapper[4851]: I0223 13:08:04.906434 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:04Z is after 2026-02-23T05:33:13Z Feb 23 13:08:04 crc kubenswrapper[4851]: I0223 13:08:04.930958 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:54:41.740967912 +0000 UTC Feb 23 13:08:05 crc kubenswrapper[4851]: W0223 13:08:05.180623 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:05Z is after 2026-02-23T05:33:13Z Feb 23 13:08:05 crc kubenswrapper[4851]: E0223 13:08:05.180727 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:05Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:08:05 crc kubenswrapper[4851]: I0223 13:08:05.934764 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:03:34.560448007 +0000 UTC Feb 23 13:08:05 crc kubenswrapper[4851]: I0223 13:08:05.937595 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:05Z is after 2026-02-23T05:33:13Z Feb 23 13:08:06 crc kubenswrapper[4851]: E0223 13:08:06.035486 4851 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 13:08:06 crc kubenswrapper[4851]: I0223 13:08:06.110565 4851 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 13:08:06 crc kubenswrapper[4851]: E0223 13:08:06.114032 4851 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:08:06 crc kubenswrapper[4851]: E0223 13:08:06.115238 4851 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached 
backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 23 13:08:06 crc kubenswrapper[4851]: I0223 13:08:06.908462 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:06Z is after 2026-02-23T05:33:13Z Feb 23 13:08:06 crc kubenswrapper[4851]: I0223 13:08:06.935073 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 19:46:07.53114312 +0000 UTC Feb 23 13:08:07 crc kubenswrapper[4851]: W0223 13:08:07.900133 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:07Z is after 2026-02-23T05:33:13Z Feb 23 13:08:07 crc kubenswrapper[4851]: E0223 13:08:07.900235 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:07Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:08:07 crc kubenswrapper[4851]: I0223 13:08:07.907006 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T13:08:07Z is after 2026-02-23T05:33:13Z Feb 23 13:08:07 crc kubenswrapper[4851]: I0223 13:08:07.935417 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 19:53:57.893573401 +0000 UTC Feb 23 13:08:08 crc kubenswrapper[4851]: W0223 13:08:08.199524 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:08Z is after 2026-02-23T05:33:13Z Feb 23 13:08:08 crc kubenswrapper[4851]: E0223 13:08:08.199619 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:08Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:08:08 crc kubenswrapper[4851]: E0223 13:08:08.248926 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:08Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 13:08:08 crc kubenswrapper[4851]: I0223 13:08:08.251023 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:08 crc kubenswrapper[4851]: I0223 13:08:08.252369 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 
23 13:08:08 crc kubenswrapper[4851]: I0223 13:08:08.252456 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:08 crc kubenswrapper[4851]: I0223 13:08:08.252472 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:08 crc kubenswrapper[4851]: I0223 13:08:08.252497 4851 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 13:08:08 crc kubenswrapper[4851]: E0223 13:08:08.256160 4851 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:08Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 13:08:08 crc kubenswrapper[4851]: I0223 13:08:08.906919 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:08Z is after 2026-02-23T05:33:13Z Feb 23 13:08:08 crc kubenswrapper[4851]: I0223 13:08:08.936495 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:55:29.10889904 +0000 UTC Feb 23 13:08:09 crc kubenswrapper[4851]: I0223 13:08:09.907518 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:09Z is after 2026-02-23T05:33:13Z Feb 23 13:08:09 crc kubenswrapper[4851]: I0223 13:08:09.936817 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 12:31:48.934468544 +0000 UTC Feb 23 13:08:10 crc kubenswrapper[4851]: W0223 13:08:10.166531 4851 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:10Z is after 2026-02-23T05:33:13Z Feb 23 13:08:10 crc kubenswrapper[4851]: E0223 13:08:10.166610 4851 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 13:08:10 crc kubenswrapper[4851]: E0223 13:08:10.849357 4851 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:10Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896e20c4e9a8ddc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC m=+0.584809188,LastTimestamp:2026-02-23 13:07:25.9031055 +0000 UTC m=+0.584809188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 
13:08:10 crc kubenswrapper[4851]: I0223 13:08:10.906805 4851 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:10Z is after 2026-02-23T05:33:13Z Feb 23 13:08:10 crc kubenswrapper[4851]: I0223 13:08:10.937204 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:52:28.860540936 +0000 UTC Feb 23 13:08:11 crc kubenswrapper[4851]: I0223 13:08:11.590247 4851 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 23 13:08:11 crc kubenswrapper[4851]: I0223 13:08:11.938055 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 09:07:56.789367119 +0000 UTC Feb 23 13:08:12 crc kubenswrapper[4851]: I0223 13:08:12.047257 4851 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:08:12 crc kubenswrapper[4851]: I0223 13:08:12.047322 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:08:12 crc kubenswrapper[4851]: I0223 13:08:12.938466 4851 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:12:42.202140577 +0000 UTC Feb 23 13:08:13 crc kubenswrapper[4851]: I0223 13:08:13.939121 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:49:51.8357859 +0000 UTC Feb 23 13:08:14 crc kubenswrapper[4851]: I0223 13:08:14.939936 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:39:36.343766304 +0000 UTC Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.257174 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.258482 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.258533 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.258551 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.258689 4851 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.266963 4851 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.267311 4851 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.267359 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.270938 4851 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.271003 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.271015 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.271033 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.271044 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:15Z","lastTransitionTime":"2026-02-23T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.285673 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.293811 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.293901 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.293922 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.293944 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.293961 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:15Z","lastTransitionTime":"2026-02-23T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.305986 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.313232 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.313287 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.313305 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.313347 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.313366 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:15Z","lastTransitionTime":"2026-02-23T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.324012 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.331461 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.331507 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.331526 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.331547 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.331562 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:15Z","lastTransitionTime":"2026-02-23T13:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.340813 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.340972 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.341003 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.441898 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.542350 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.643353 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.743572 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.844397 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:15 crc kubenswrapper[4851]: I0223 13:08:15.940609 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 
06:59:34.68834661 +0000 UTC Feb 23 13:08:15 crc kubenswrapper[4851]: E0223 13:08:15.944854 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: E0223 13:08:16.036646 4851 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: E0223 13:08:16.045056 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: E0223 13:08:16.145703 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: E0223 13:08:16.246219 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: E0223 13:08:16.346617 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: E0223 13:08:16.447727 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: E0223 13:08:16.548609 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: E0223 13:08:16.649613 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: E0223 13:08:16.750612 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: E0223 13:08:16.851127 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: I0223 13:08:16.941403 4851 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:45:56.775795171 +0000 UTC Feb 23 13:08:16 crc kubenswrapper[4851]: E0223 13:08:16.951689 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:16 crc kubenswrapper[4851]: I0223 13:08:16.968235 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:16 crc kubenswrapper[4851]: I0223 13:08:16.969893 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:16 crc kubenswrapper[4851]: I0223 13:08:16.969946 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:16 crc kubenswrapper[4851]: I0223 13:08:16.969963 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:16 crc kubenswrapper[4851]: I0223 13:08:16.970921 4851 scope.go:117] "RemoveContainer" containerID="f2d739d71b9a0278fc7699ce6bc7a08c4d28c01479a43689cee6e48144a0572e" Feb 23 13:08:17 crc kubenswrapper[4851]: E0223 13:08:17.052230 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:17 crc kubenswrapper[4851]: E0223 13:08:17.152462 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:17 crc kubenswrapper[4851]: I0223 13:08:17.196911 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 13:08:17 crc kubenswrapper[4851]: I0223 13:08:17.198449 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1"} Feb 23 13:08:17 crc kubenswrapper[4851]: E0223 13:08:17.252549 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:17 crc kubenswrapper[4851]: E0223 13:08:17.353116 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:17 crc kubenswrapper[4851]: E0223 13:08:17.453804 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:17 crc kubenswrapper[4851]: E0223 13:08:17.554684 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:17 crc kubenswrapper[4851]: E0223 13:08:17.655057 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:17 crc kubenswrapper[4851]: E0223 13:08:17.755758 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:17 crc kubenswrapper[4851]: I0223 13:08:17.762926 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 13:08:17 crc kubenswrapper[4851]: I0223 13:08:17.763064 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:17 crc kubenswrapper[4851]: I0223 13:08:17.764213 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:17 crc kubenswrapper[4851]: I0223 13:08:17.764246 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:17 crc kubenswrapper[4851]: I0223 13:08:17.764255 4851 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:17 crc kubenswrapper[4851]: E0223 13:08:17.856680 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:17 crc kubenswrapper[4851]: I0223 13:08:17.942406 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:06:26.820621329 +0000 UTC Feb 23 13:08:17 crc kubenswrapper[4851]: E0223 13:08:17.957521 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:18 crc kubenswrapper[4851]: E0223 13:08:18.058506 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:18 crc kubenswrapper[4851]: E0223 13:08:18.159432 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:18 crc kubenswrapper[4851]: I0223 13:08:18.202121 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 13:08:18 crc kubenswrapper[4851]: I0223 13:08:18.202566 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 13:08:18 crc kubenswrapper[4851]: I0223 13:08:18.204584 4851 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1" exitCode=255 Feb 23 13:08:18 crc kubenswrapper[4851]: I0223 13:08:18.204625 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1"} Feb 23 13:08:18 crc kubenswrapper[4851]: I0223 13:08:18.204657 4851 scope.go:117] "RemoveContainer" containerID="f2d739d71b9a0278fc7699ce6bc7a08c4d28c01479a43689cee6e48144a0572e" Feb 23 13:08:18 crc kubenswrapper[4851]: I0223 13:08:18.204790 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:18 crc kubenswrapper[4851]: I0223 13:08:18.205501 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:18 crc kubenswrapper[4851]: I0223 13:08:18.205521 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:18 crc kubenswrapper[4851]: I0223 13:08:18.205531 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:18 crc kubenswrapper[4851]: I0223 13:08:18.205975 4851 scope.go:117] "RemoveContainer" containerID="93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1" Feb 23 13:08:18 crc kubenswrapper[4851]: E0223 13:08:18.206109 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:08:18 crc kubenswrapper[4851]: E0223 13:08:18.260086 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:18 crc kubenswrapper[4851]: E0223 13:08:18.361041 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 
13:08:18 crc kubenswrapper[4851]: E0223 13:08:18.461303 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:18 crc kubenswrapper[4851]: E0223 13:08:18.561901 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:18 crc kubenswrapper[4851]: E0223 13:08:18.662609 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:18 crc kubenswrapper[4851]: E0223 13:08:18.763289 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:18 crc kubenswrapper[4851]: E0223 13:08:18.864447 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:18 crc kubenswrapper[4851]: I0223 13:08:18.943395 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:36:55.804057305 +0000 UTC Feb 23 13:08:18 crc kubenswrapper[4851]: E0223 13:08:18.964910 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:19 crc kubenswrapper[4851]: E0223 13:08:19.065704 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:19 crc kubenswrapper[4851]: E0223 13:08:19.166610 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:19 crc kubenswrapper[4851]: I0223 13:08:19.209321 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 13:08:19 crc kubenswrapper[4851]: I0223 13:08:19.212046 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 23 13:08:19 crc kubenswrapper[4851]: I0223 13:08:19.213114 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:19 crc kubenswrapper[4851]: I0223 13:08:19.213157 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:19 crc kubenswrapper[4851]: I0223 13:08:19.213179 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:19 crc kubenswrapper[4851]: I0223 13:08:19.214162 4851 scope.go:117] "RemoveContainer" containerID="93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1" Feb 23 13:08:19 crc kubenswrapper[4851]: E0223 13:08:19.214478 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:08:19 crc kubenswrapper[4851]: E0223 13:08:19.267176 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:19 crc kubenswrapper[4851]: E0223 13:08:19.368036 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:19 crc kubenswrapper[4851]: E0223 13:08:19.468863 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:19 crc kubenswrapper[4851]: I0223 13:08:19.567581 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:08:19 crc kubenswrapper[4851]: E0223 13:08:19.569733 4851 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Feb 23 13:08:19 crc kubenswrapper[4851]: E0223 13:08:19.670682 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:19 crc kubenswrapper[4851]: E0223 13:08:19.770883 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:19 crc kubenswrapper[4851]: E0223 13:08:19.871685 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:19 crc kubenswrapper[4851]: I0223 13:08:19.944275 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:56:22.913388156 +0000 UTC Feb 23 13:08:19 crc kubenswrapper[4851]: E0223 13:08:19.972191 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:20 crc kubenswrapper[4851]: E0223 13:08:20.072452 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:20 crc kubenswrapper[4851]: E0223 13:08:20.173381 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.215119 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.216123 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.216188 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.216207 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 
13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.216952 4851 scope.go:117] "RemoveContainer" containerID="93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1" Feb 23 13:08:20 crc kubenswrapper[4851]: E0223 13:08:20.217149 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:08:20 crc kubenswrapper[4851]: E0223 13:08:20.274022 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:20 crc kubenswrapper[4851]: E0223 13:08:20.375136 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:20 crc kubenswrapper[4851]: E0223 13:08:20.475683 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:20 crc kubenswrapper[4851]: E0223 13:08:20.576551 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:20 crc kubenswrapper[4851]: E0223 13:08:20.677062 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.774913 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.775058 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.776686 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.776746 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.776762 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:20 crc kubenswrapper[4851]: E0223 13:08:20.777353 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.779478 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:08:20 crc kubenswrapper[4851]: E0223 13:08:20.877788 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:20 crc kubenswrapper[4851]: I0223 13:08:20.944840 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 14:12:34.375796217 +0000 UTC Feb 23 13:08:20 crc kubenswrapper[4851]: E0223 13:08:20.978014 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:21 crc kubenswrapper[4851]: E0223 13:08:21.078532 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:21 crc kubenswrapper[4851]: E0223 13:08:21.179301 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:21 crc kubenswrapper[4851]: I0223 13:08:21.217151 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:21 crc kubenswrapper[4851]: I0223 13:08:21.218050 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 13:08:21 crc kubenswrapper[4851]: I0223 13:08:21.218091 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:21 crc kubenswrapper[4851]: I0223 13:08:21.218103 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:21 crc kubenswrapper[4851]: E0223 13:08:21.279916 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:21 crc kubenswrapper[4851]: E0223 13:08:21.381015 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:21 crc kubenswrapper[4851]: E0223 13:08:21.482377 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:21 crc kubenswrapper[4851]: E0223 13:08:21.583632 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:21 crc kubenswrapper[4851]: E0223 13:08:21.684222 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:21 crc kubenswrapper[4851]: E0223 13:08:21.784953 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:21 crc kubenswrapper[4851]: E0223 13:08:21.885786 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:21 crc kubenswrapper[4851]: I0223 13:08:21.945379 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:57:38.662311273 +0000 UTC Feb 23 13:08:21 crc kubenswrapper[4851]: E0223 13:08:21.986891 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:22 crc 
kubenswrapper[4851]: E0223 13:08:22.087693 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:22 crc kubenswrapper[4851]: E0223 13:08:22.188247 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:22 crc kubenswrapper[4851]: E0223 13:08:22.289571 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:22 crc kubenswrapper[4851]: E0223 13:08:22.390471 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:22 crc kubenswrapper[4851]: E0223 13:08:22.491554 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:22 crc kubenswrapper[4851]: I0223 13:08:22.584143 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:08:22 crc kubenswrapper[4851]: I0223 13:08:22.584584 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:22 crc kubenswrapper[4851]: I0223 13:08:22.585904 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:22 crc kubenswrapper[4851]: I0223 13:08:22.586004 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:22 crc kubenswrapper[4851]: I0223 13:08:22.586097 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:22 crc kubenswrapper[4851]: I0223 13:08:22.586713 4851 scope.go:117] "RemoveContainer" containerID="93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1" Feb 23 13:08:22 crc kubenswrapper[4851]: E0223 13:08:22.586933 4851 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:08:22 crc kubenswrapper[4851]: E0223 13:08:22.592661 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:22 crc kubenswrapper[4851]: E0223 13:08:22.693234 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:22 crc kubenswrapper[4851]: E0223 13:08:22.793604 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:22 crc kubenswrapper[4851]: E0223 13:08:22.894418 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:22 crc kubenswrapper[4851]: I0223 13:08:22.945791 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:47:52.016939834 +0000 UTC Feb 23 13:08:22 crc kubenswrapper[4851]: E0223 13:08:22.995352 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:23 crc kubenswrapper[4851]: E0223 13:08:23.096390 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:23 crc kubenswrapper[4851]: E0223 13:08:23.196988 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:23 crc kubenswrapper[4851]: E0223 13:08:23.298313 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:23 crc kubenswrapper[4851]: E0223 
13:08:23.398910 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:23 crc kubenswrapper[4851]: E0223 13:08:23.499389 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:23 crc kubenswrapper[4851]: E0223 13:08:23.600101 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:23 crc kubenswrapper[4851]: E0223 13:08:23.701030 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:23 crc kubenswrapper[4851]: E0223 13:08:23.801857 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:23 crc kubenswrapper[4851]: E0223 13:08:23.902746 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:23 crc kubenswrapper[4851]: I0223 13:08:23.946045 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:45:18.407099974 +0000 UTC Feb 23 13:08:24 crc kubenswrapper[4851]: E0223 13:08:24.003776 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:24 crc kubenswrapper[4851]: E0223 13:08:24.104845 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:24 crc kubenswrapper[4851]: E0223 13:08:24.205633 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:24 crc kubenswrapper[4851]: E0223 13:08:24.306166 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:24 crc kubenswrapper[4851]: E0223 13:08:24.407012 4851 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:24 crc kubenswrapper[4851]: E0223 13:08:24.507708 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:24 crc kubenswrapper[4851]: E0223 13:08:24.608279 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:24 crc kubenswrapper[4851]: E0223 13:08:24.709238 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:24 crc kubenswrapper[4851]: E0223 13:08:24.810420 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:24 crc kubenswrapper[4851]: E0223 13:08:24.911211 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:24 crc kubenswrapper[4851]: I0223 13:08:24.946411 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:10:58.706357639 +0000 UTC Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.011609 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.112278 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.212563 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.313377 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.395792 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="error 
getting node \"crc\": node \"crc\" not found" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.399936 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.400015 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.400042 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.400068 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.400086 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:25Z","lastTransitionTime":"2026-02-23T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.414723 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.422488 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.422525 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.422533 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.422549 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.422560 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:25Z","lastTransitionTime":"2026-02-23T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.479231 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.488220 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.488261 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.488273 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.488288 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.488301 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:25Z","lastTransitionTime":"2026-02-23T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.498952 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.508161 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.508201 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.508212 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.508228 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.508238 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:25Z","lastTransitionTime":"2026-02-23T13:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.517204 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.517380 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.517399 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.618521 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.719206 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.819724 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:25 crc kubenswrapper[4851]: E0223 13:08:25.919866 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:25 crc kubenswrapper[4851]: I0223 13:08:25.947319 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:50:35.642704769 +0000 UTC Feb 23 13:08:26 crc kubenswrapper[4851]: E0223 13:08:26.020102 4851 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 23 13:08:26 crc kubenswrapper[4851]: E0223 13:08:26.037024 4851 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 13:08:26 crc kubenswrapper[4851]: E0223 13:08:26.120593 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:26 crc kubenswrapper[4851]: E0223 13:08:26.221312 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:26 crc kubenswrapper[4851]: E0223 13:08:26.321777 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:26 crc kubenswrapper[4851]: E0223 13:08:26.422533 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:26 crc kubenswrapper[4851]: E0223 13:08:26.523168 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:26 crc kubenswrapper[4851]: E0223 13:08:26.623831 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:26 crc kubenswrapper[4851]: E0223 13:08:26.724848 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:26 crc kubenswrapper[4851]: E0223 13:08:26.825072 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:26 crc kubenswrapper[4851]: E0223 13:08:26.925874 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:26 crc kubenswrapper[4851]: I0223 13:08:26.948047 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:55:52.646376582 +0000 UTC 
Feb 23 13:08:27 crc kubenswrapper[4851]: E0223 13:08:27.026462 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:27 crc kubenswrapper[4851]: E0223 13:08:27.127216 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:27 crc kubenswrapper[4851]: E0223 13:08:27.227680 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:27 crc kubenswrapper[4851]: E0223 13:08:27.327982 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:27 crc kubenswrapper[4851]: E0223 13:08:27.428796 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:27 crc kubenswrapper[4851]: E0223 13:08:27.529646 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:27 crc kubenswrapper[4851]: E0223 13:08:27.630304 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:27 crc kubenswrapper[4851]: E0223 13:08:27.731397 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:27 crc kubenswrapper[4851]: E0223 13:08:27.832099 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:27 crc kubenswrapper[4851]: E0223 13:08:27.932947 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:27 crc kubenswrapper[4851]: I0223 13:08:27.949216 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:02:01.694695188 +0000 UTC Feb 23 13:08:28 crc kubenswrapper[4851]: E0223 
13:08:28.033103 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:28 crc kubenswrapper[4851]: E0223 13:08:28.134222 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:28 crc kubenswrapper[4851]: E0223 13:08:28.234354 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:28 crc kubenswrapper[4851]: E0223 13:08:28.334980 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:28 crc kubenswrapper[4851]: E0223 13:08:28.435911 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:28 crc kubenswrapper[4851]: E0223 13:08:28.536778 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:28 crc kubenswrapper[4851]: E0223 13:08:28.637799 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:28 crc kubenswrapper[4851]: E0223 13:08:28.738396 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:28 crc kubenswrapper[4851]: E0223 13:08:28.839175 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:28 crc kubenswrapper[4851]: E0223 13:08:28.939689 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:28 crc kubenswrapper[4851]: I0223 13:08:28.949939 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:07:45.905128945 +0000 UTC Feb 23 13:08:29 crc kubenswrapper[4851]: E0223 13:08:29.040209 4851 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:29 crc kubenswrapper[4851]: E0223 13:08:29.140560 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:29 crc kubenswrapper[4851]: E0223 13:08:29.241388 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:29 crc kubenswrapper[4851]: E0223 13:08:29.341517 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:29 crc kubenswrapper[4851]: E0223 13:08:29.442645 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:29 crc kubenswrapper[4851]: E0223 13:08:29.543610 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:29 crc kubenswrapper[4851]: E0223 13:08:29.644819 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:29 crc kubenswrapper[4851]: E0223 13:08:29.745791 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:29 crc kubenswrapper[4851]: E0223 13:08:29.845983 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:29 crc kubenswrapper[4851]: E0223 13:08:29.946961 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:29 crc kubenswrapper[4851]: I0223 13:08:29.950111 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:07:30.062090872 +0000 UTC Feb 23 13:08:30 crc kubenswrapper[4851]: E0223 13:08:30.047562 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 23 13:08:30 crc kubenswrapper[4851]: E0223 13:08:30.147693 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:30 crc kubenswrapper[4851]: E0223 13:08:30.248159 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:30 crc kubenswrapper[4851]: E0223 13:08:30.349209 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:30 crc kubenswrapper[4851]: E0223 13:08:30.450287 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:30 crc kubenswrapper[4851]: E0223 13:08:30.551075 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:30 crc kubenswrapper[4851]: E0223 13:08:30.651672 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:30 crc kubenswrapper[4851]: E0223 13:08:30.752596 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:30 crc kubenswrapper[4851]: E0223 13:08:30.853650 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:30 crc kubenswrapper[4851]: I0223 13:08:30.950436 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:06:27.860266395 +0000 UTC Feb 23 13:08:30 crc kubenswrapper[4851]: E0223 13:08:30.954661 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:31 crc kubenswrapper[4851]: E0223 13:08:31.055485 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:31 crc 
kubenswrapper[4851]: E0223 13:08:31.155916 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:31 crc kubenswrapper[4851]: E0223 13:08:31.256387 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:31 crc kubenswrapper[4851]: E0223 13:08:31.356488 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:31 crc kubenswrapper[4851]: E0223 13:08:31.457447 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:31 crc kubenswrapper[4851]: E0223 13:08:31.558101 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:31 crc kubenswrapper[4851]: E0223 13:08:31.658590 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:31 crc kubenswrapper[4851]: E0223 13:08:31.759176 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:31 crc kubenswrapper[4851]: E0223 13:08:31.860441 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:31 crc kubenswrapper[4851]: I0223 13:08:31.951613 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:08:48.10966057 +0000 UTC Feb 23 13:08:31 crc kubenswrapper[4851]: E0223 13:08:31.960797 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:32 crc kubenswrapper[4851]: E0223 13:08:32.061764 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:32 crc kubenswrapper[4851]: E0223 13:08:32.162618 4851 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:32 crc kubenswrapper[4851]: E0223 13:08:32.263262 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:32 crc kubenswrapper[4851]: E0223 13:08:32.364184 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:32 crc kubenswrapper[4851]: E0223 13:08:32.464287 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:32 crc kubenswrapper[4851]: E0223 13:08:32.565121 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:32 crc kubenswrapper[4851]: E0223 13:08:32.666258 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:32 crc kubenswrapper[4851]: E0223 13:08:32.767089 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:32 crc kubenswrapper[4851]: E0223 13:08:32.868145 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:32 crc kubenswrapper[4851]: I0223 13:08:32.952108 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 22:21:30.33535777 +0000 UTC Feb 23 13:08:32 crc kubenswrapper[4851]: E0223 13:08:32.968414 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:33 crc kubenswrapper[4851]: E0223 13:08:33.068742 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:33 crc kubenswrapper[4851]: E0223 13:08:33.169731 4851 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Feb 23 13:08:33 crc kubenswrapper[4851]: E0223 13:08:33.269817 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:33 crc kubenswrapper[4851]: E0223 13:08:33.370261 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:33 crc kubenswrapper[4851]: E0223 13:08:33.470841 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:33 crc kubenswrapper[4851]: E0223 13:08:33.571880 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:33 crc kubenswrapper[4851]: E0223 13:08:33.672640 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:33 crc kubenswrapper[4851]: E0223 13:08:33.773628 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:33 crc kubenswrapper[4851]: E0223 13:08:33.874179 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:33 crc kubenswrapper[4851]: I0223 13:08:33.952607 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:07:57.395964101 +0000 UTC Feb 23 13:08:33 crc kubenswrapper[4851]: E0223 13:08:33.975180 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:34 crc kubenswrapper[4851]: E0223 13:08:34.075708 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:34 crc kubenswrapper[4851]: E0223 13:08:34.176640 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 
13:08:34 crc kubenswrapper[4851]: E0223 13:08:34.277725 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:34 crc kubenswrapper[4851]: E0223 13:08:34.378710 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:34 crc kubenswrapper[4851]: E0223 13:08:34.479754 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:34 crc kubenswrapper[4851]: E0223 13:08:34.580534 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:34 crc kubenswrapper[4851]: E0223 13:08:34.681279 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:34 crc kubenswrapper[4851]: E0223 13:08:34.782164 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:34 crc kubenswrapper[4851]: E0223 13:08:34.882275 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:34 crc kubenswrapper[4851]: I0223 13:08:34.953274 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 07:08:28.725371603 +0000 UTC Feb 23 13:08:34 crc kubenswrapper[4851]: E0223 13:08:34.982733 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.083627 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.184457 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 
13:08:35.285396 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.385633 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.486465 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.586805 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.687690 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.736150 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.741218 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.741279 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.741296 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.741324 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.741370 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:35Z","lastTransitionTime":"2026-02-23T13:08:35Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.757780 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.765296 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.765361 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.765375 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.765394 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.765408 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:35Z","lastTransitionTime":"2026-02-23T13:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.775936 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.784466 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.784519 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.784528 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.784543 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.784552 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:35Z","lastTransitionTime":"2026-02-23T13:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.793783 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.799610 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.799654 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.799665 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.799678 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.799688 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:35Z","lastTransitionTime":"2026-02-23T13:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.809397 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.809557 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.809598 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.910190 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.954074 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 21:44:26.739771243 +0000 UTC Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.968531 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.968531 4851 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.969570 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.969596 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.969604 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.969748 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.969781 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.969791 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:35 crc kubenswrapper[4851]: I0223 13:08:35.970443 4851 scope.go:117] "RemoveContainer" containerID="93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1" Feb 23 13:08:35 crc kubenswrapper[4851]: E0223 13:08:35.970604 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:08:36 crc kubenswrapper[4851]: E0223 13:08:36.010418 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:36 crc kubenswrapper[4851]: E0223 13:08:36.037875 4851 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 13:08:36 crc kubenswrapper[4851]: E0223 13:08:36.110857 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:36 crc kubenswrapper[4851]: E0223 13:08:36.211202 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:36 crc kubenswrapper[4851]: E0223 13:08:36.311357 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:36 crc kubenswrapper[4851]: E0223 13:08:36.412205 4851 kubelet_node_status.go:503] "Error 
getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:36 crc kubenswrapper[4851]: E0223 13:08:36.512931 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:36 crc kubenswrapper[4851]: E0223 13:08:36.613666 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:36 crc kubenswrapper[4851]: E0223 13:08:36.714425 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:36 crc kubenswrapper[4851]: E0223 13:08:36.814850 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:36 crc kubenswrapper[4851]: E0223 13:08:36.915535 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:36 crc kubenswrapper[4851]: I0223 13:08:36.954806 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:40:17.209017254 +0000 UTC Feb 23 13:08:37 crc kubenswrapper[4851]: E0223 13:08:37.016447 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:37 crc kubenswrapper[4851]: E0223 13:08:37.117188 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:37 crc kubenswrapper[4851]: E0223 13:08:37.217460 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:37 crc kubenswrapper[4851]: E0223 13:08:37.317567 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:37 crc kubenswrapper[4851]: E0223 13:08:37.417744 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 23 13:08:37 crc kubenswrapper[4851]: E0223 13:08:37.518774 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:37 crc kubenswrapper[4851]: E0223 13:08:37.619498 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:37 crc kubenswrapper[4851]: E0223 13:08:37.720272 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:37 crc kubenswrapper[4851]: E0223 13:08:37.820912 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:37 crc kubenswrapper[4851]: E0223 13:08:37.921770 4851 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 13:08:37 crc kubenswrapper[4851]: I0223 13:08:37.954982 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:20:09.655422622 +0000 UTC Feb 23 13:08:37 crc kubenswrapper[4851]: I0223 13:08:37.976940 4851 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.024636 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.024668 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.024676 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.024690 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:38 crc kubenswrapper[4851]: 
I0223 13:08:38.024700 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:38Z","lastTransitionTime":"2026-02-23T13:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.116399 4851 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.128621 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.128645 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.128654 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.128667 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.128676 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:38Z","lastTransitionTime":"2026-02-23T13:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.130174 4851 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.148541 4851 csr.go:261] certificate signing request csr-8d6p2 is approved, waiting to be issued Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.157131 4851 csr.go:257] certificate signing request csr-8d6p2 is issued Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.232200 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.232245 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.232252 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.232266 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.232275 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:38Z","lastTransitionTime":"2026-02-23T13:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.334242 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.334291 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.334302 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.334363 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.334378 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:38Z","lastTransitionTime":"2026-02-23T13:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.437558 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.437617 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.437631 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.437649 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.437662 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:38Z","lastTransitionTime":"2026-02-23T13:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.539924 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.539969 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.539979 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.539993 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.540001 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:38Z","lastTransitionTime":"2026-02-23T13:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.642294 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.642349 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.642359 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.642374 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.642386 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:38Z","lastTransitionTime":"2026-02-23T13:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.746081 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.746136 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.746145 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.746163 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.746173 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:38Z","lastTransitionTime":"2026-02-23T13:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.848587 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.848642 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.848652 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.848666 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.848675 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:38Z","lastTransitionTime":"2026-02-23T13:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.950689 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.950722 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.950730 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.950742 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.950753 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:38Z","lastTransitionTime":"2026-02-23T13:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.955263 4851 apiserver.go:52] "Watching apiserver" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.955272 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:07:04.468798285 +0000 UTC Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.959450 4851 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.959859 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.960261 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.960521 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.960544 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.960653 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 13:08:38 crc kubenswrapper[4851]: E0223 13:08:38.960585 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:08:38 crc kubenswrapper[4851]: E0223 13:08:38.960734 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.960770 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.960960 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:38 crc kubenswrapper[4851]: E0223 13:08:38.961186 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.966024 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.966185 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.966320 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.966436 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.966518 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.966606 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.966777 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.966787 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.969233 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 13:08:38 crc kubenswrapper[4851]: I0223 13:08:38.990230 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.001509 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.010261 4851 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.011986 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.021475 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.031506 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.040915 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.050355 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.053375 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.053445 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.053464 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.053486 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.053506 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:39Z","lastTransitionTime":"2026-02-23T13:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.083943 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.084078 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.084240 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.084374 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.084522 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.084634 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.084754 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.084862 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.084973 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.085144 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.085248 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.085203 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.085404 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.085382 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086068 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086200 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.085524 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086261 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086271 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086321 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.085451 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086457 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086496 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086523 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086546 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086554 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086611 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086643 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086666 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086692 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086716 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086745 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086776 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086804 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086828 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086850 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086873 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086876 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086895 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086919 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086955 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086976 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086999 4851 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087021 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087042 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087068 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087089 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087105 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087127 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087150 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087171 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087192 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087215 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087236 4851 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087254 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087275 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087301 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087322 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087370 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087400 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087424 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087444 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087463 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087511 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087532 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087646 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087665 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087685 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087708 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087840 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 13:08:39 
crc kubenswrapper[4851]: I0223 13:08:39.087865 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087886 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087905 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087925 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088021 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088041 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088061 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088081 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088188 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088207 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088227 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088249 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088266 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088380 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088406 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088427 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088445 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088467 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088488 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088552 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088701 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088764 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088789 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088818 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088960 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088981 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089000 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089021 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089042 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089185 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089213 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089233 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089251 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089273 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089387 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089411 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089433 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089456 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089477 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089625 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089651 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089672 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089692 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089712 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 
13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089862 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089885 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089906 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089933 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089959 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090100 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090122 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090145 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090163 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090186 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090258 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 13:08:39 
crc kubenswrapper[4851]: I0223 13:08:39.090280 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090298 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090319 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090363 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090427 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090449 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090470 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090489 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090512 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090531 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090634 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090652 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090673 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090693 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090712 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090828 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090855 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090873 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090895 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090917 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091039 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091069 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091099 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091128 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091151 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091180 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091215 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091243 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091269 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091299 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091342 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091369 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091398 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086994 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.086978 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087099 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087248 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087446 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091583 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087726 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.087783 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091650 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088190 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088359 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088583 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088613 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.088780 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089027 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089069 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089079 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089292 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089493 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089820 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.089954 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090012 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090076 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090305 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.090770 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091461 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.091935 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:08:39.591908959 +0000 UTC m=+74.273612637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091939 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.092065 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.092482 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.092891 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.092982 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.093228 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.093310 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.093468 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.093384 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.093593 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.093846 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.093874 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094240 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094241 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094402 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094428 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.091742 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094454 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094509 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094527 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094540 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094692 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094738 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094768 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094804 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094835 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094864 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094897 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094931 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094959 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094984 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.094991 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095013 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095041 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095064 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: 
I0223 13:08:39.095090 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095119 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095143 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095152 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095184 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095215 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095247 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095817 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095882 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095920 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095942 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095964 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095993 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096015 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096034 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: 
\"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096054 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096075 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096093 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096113 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096136 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096154 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096174 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096198 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096218 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096235 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096255 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096275 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096294 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096312 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096363 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096396 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096417 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096436 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096486 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096511 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096536 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096556 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096577 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096597 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096617 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096637 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096659 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096684 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096703 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096749 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096769 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096794 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096888 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096901 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096912 4851 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096922 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096935 4851 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096945 4851 reconciler_common.go:293] "Volume detached for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096955 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096964 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.096986 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097000 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097013 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097026 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097046 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097063 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097078 4851 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097095 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097105 4851 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097116 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097126 4851 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097141 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 
23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097151 4851 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097161 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097171 4851 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097186 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097198 4851 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097211 4851 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097225 4851 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097236 4851 reconciler_common.go:293] 
"Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097250 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097267 4851 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097284 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097296 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097308 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097318 4851 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097353 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097364 4851 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097375 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097390 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097407 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097416 4851 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097426 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097439 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097449 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097458 4851 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097467 4851 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097479 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097488 4851 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097497 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097507 4851 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc 
kubenswrapper[4851]: I0223 13:08:39.097519 4851 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097530 4851 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097539 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097548 4851 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.101507 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095205 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095265 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095521 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095564 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095589 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095601 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.095770 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.097635 4851 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.097781 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.098158 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.098896 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.099053 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.099284 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.099415 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.099482 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.099186 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.099542 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.099674 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.099982 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.100051 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.100150 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.100210 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.100314 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.100393 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.100508 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.100660 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.100673 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.101042 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.101155 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.101314 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.101545 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.101597 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.101611 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.101705 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.101073 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.101906 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.102084 4851 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.103056 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:08:39.60303321 +0000 UTC m=+74.284736888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.102169 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.102136 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.102197 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.102403 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.102421 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.102562 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.103245 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:39.603223045 +0000 UTC m=+74.284926713 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.102480 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.102667 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.102674 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.103486 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.103762 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.103800 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.103868 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.104098 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.104139 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.104379 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.104421 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.104449 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.104752 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.105087 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.104987 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.105279 4851 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.106668 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.107477 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.107506 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.107545 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.108136 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.108538 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.108732 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.109921 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.111295 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.113493 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.113636 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.113631 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.114262 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.114380 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.114488 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.114929 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.115894 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.116210 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.116311 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.116414 4851 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.116582 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:39.616553391 +0000 UTC m=+74.298257069 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.118175 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.118810 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.122571 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.102241 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.130838 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.131148 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.131450 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.131714 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.131891 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.132036 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.132203 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.132419 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.132301 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.132605 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.132647 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.133041 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.133047 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.133147 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.133384 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.133474 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.133716 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.133775 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.133915 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.134208 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.134315 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.134460 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.134555 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.134802 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.134861 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.135149 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.135186 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.135270 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.135513 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.135580 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.135602 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.135788 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.136306 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.136414 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.136892 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.136985 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.137054 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.137068 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.137608 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.137804 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.137831 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.137990 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.138128 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.138436 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.138455 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.138839 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.138882 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.139502 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.139597 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.140390 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.140676 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.140776 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.140989 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.141004 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.141653 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.142093 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.142809 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.143829 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.145277 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.145667 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.146139 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.146757 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.146755 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.149403 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.153509 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.153609 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.153673 4851 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.153781 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:39.653762216 +0000 UTC m=+74.335465894 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.155934 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.156872 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.157092 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.157120 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.157130 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.157144 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:39 crc 
kubenswrapper[4851]: I0223 13:08:39.157153 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:39Z","lastTransitionTime":"2026-02-23T13:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.157802 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.159669 4851 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-23 13:03:38 +0000 UTC, rotation deadline is 2026-12-17 03:48:00.782064525 +0000 UTC Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.159690 4851 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7118h39m21.622376263s for next certificate rotation Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198450 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198526 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198581 4851 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198596 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198608 4851 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198619 4851 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198633 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198645 4851 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198656 4851 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198667 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198678 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198697 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198690 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198737 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198751 4851 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198763 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198774 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198785 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198671 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198795 4851 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198841 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198856 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198869 4851 reconciler_common.go:293] "Volume detached for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198883 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198895 4851 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198907 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198920 4851 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198934 4851 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198947 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198960 4851 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198975 4851 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198987 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.198998 4851 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199009 4851 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199021 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199033 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199046 4851 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199061 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199077 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199091 4851 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199104 4851 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199115 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199127 4851 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199139 4851 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199150 4851 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199163 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199175 4851 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199187 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199199 4851 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199210 4851 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199222 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199235 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199246 4851 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199257 4851 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199269 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199281 4851 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199294 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199306 4851 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199318 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node 
\"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199348 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199360 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199373 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199392 4851 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199405 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199418 4851 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199429 4851 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 
13:08:39.199439 4851 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199451 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199464 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199475 4851 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199488 4851 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199501 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199513 4851 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199525 4851 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199537 4851 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199549 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199576 4851 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199587 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199598 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199611 4851 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199623 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on 
node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199636 4851 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199648 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199660 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199673 4851 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199685 4851 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199697 4851 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199709 4851 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199722 4851 reconciler_common.go:293] 
"Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199734 4851 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199746 4851 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199758 4851 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199771 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199783 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199795 4851 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199806 4851 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199818 4851 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199829 4851 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199841 4851 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199854 4851 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199865 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199877 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199889 4851 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199902 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199913 4851 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199924 4851 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199936 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199947 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199959 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199970 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node 
\"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199982 4851 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.199993 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200004 4851 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200017 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200028 4851 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200039 4851 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200052 4851 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200064 4851 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200077 4851 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200089 4851 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200101 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200113 4851 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200124 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200135 4851 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200147 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200158 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200170 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200181 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200192 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200203 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200217 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200228 4851 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200242 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200252 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200263 4851 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200274 4851 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200285 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200297 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200309 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc 
kubenswrapper[4851]: I0223 13:08:39.200321 4851 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200349 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200363 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200375 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200387 4851 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.200399 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.259131 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.259166 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.259186 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.259404 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.259421 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:39Z","lastTransitionTime":"2026-02-23T13:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.279668 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.287626 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.295659 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.298154 4851 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 13:08:39 crc kubenswrapper[4851]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 23 13:08:39 crc kubenswrapper[4851]: set -o allexport Feb 23 13:08:39 crc kubenswrapper[4851]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 23 13:08:39 crc kubenswrapper[4851]: source /etc/kubernetes/apiserver-url.env Feb 23 13:08:39 crc kubenswrapper[4851]: else Feb 23 13:08:39 crc kubenswrapper[4851]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 23 13:08:39 crc kubenswrapper[4851]: exit 1 Feb 23 13:08:39 crc kubenswrapper[4851]: fi Feb 23 13:08:39 crc kubenswrapper[4851]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 23 13:08:39 crc kubenswrapper[4851]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 13:08:39 crc kubenswrapper[4851]: > logger="UnhandledError" Feb 23 13:08:39 crc kubenswrapper[4851]: W0223 13:08:39.298242 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-29c4d8ec7f1df4ef2a9bcbe382e2c454f3b003981333944b07f86a14b81a1fca WatchSource:0}: Error finding container 29c4d8ec7f1df4ef2a9bcbe382e2c454f3b003981333944b07f86a14b81a1fca: Status 404 returned error can't find the container with id 29c4d8ec7f1df4ef2a9bcbe382e2c454f3b003981333944b07f86a14b81a1fca Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.299309 4851 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.300994 4851 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 13:08:39 crc kubenswrapper[4851]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 13:08:39 crc kubenswrapper[4851]: if [[ -f "/env/_master" ]]; then Feb 23 13:08:39 crc kubenswrapper[4851]: set -o allexport Feb 23 13:08:39 crc kubenswrapper[4851]: source "/env/_master" Feb 23 13:08:39 crc kubenswrapper[4851]: set +o allexport Feb 23 13:08:39 crc kubenswrapper[4851]: fi Feb 23 13:08:39 crc kubenswrapper[4851]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 23 13:08:39 crc kubenswrapper[4851]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 23 13:08:39 crc kubenswrapper[4851]: ho_enable="--enable-hybrid-overlay" Feb 23 13:08:39 crc kubenswrapper[4851]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 23 13:08:39 crc kubenswrapper[4851]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 23 13:08:39 crc kubenswrapper[4851]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 23 13:08:39 crc kubenswrapper[4851]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 13:08:39 crc kubenswrapper[4851]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 23 13:08:39 crc kubenswrapper[4851]: --webhook-host=127.0.0.1 \ Feb 23 13:08:39 crc kubenswrapper[4851]: --webhook-port=9743 \ Feb 23 13:08:39 crc kubenswrapper[4851]: ${ho_enable} \ Feb 23 13:08:39 crc kubenswrapper[4851]: --enable-interconnect \ Feb 23 13:08:39 crc kubenswrapper[4851]: --disable-approver \ Feb 23 13:08:39 crc kubenswrapper[4851]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 23 13:08:39 crc kubenswrapper[4851]: --wait-for-kubernetes-api=200s \ Feb 23 13:08:39 crc kubenswrapper[4851]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 23 13:08:39 crc kubenswrapper[4851]: --loglevel="${LOGLEVEL}" Feb 23 13:08:39 crc kubenswrapper[4851]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 13:08:39 crc kubenswrapper[4851]: > logger="UnhandledError" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.302889 4851 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 13:08:39 crc kubenswrapper[4851]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 13:08:39 crc 
kubenswrapper[4851]: if [[ -f "/env/_master" ]]; then Feb 23 13:08:39 crc kubenswrapper[4851]: set -o allexport Feb 23 13:08:39 crc kubenswrapper[4851]: source "/env/_master" Feb 23 13:08:39 crc kubenswrapper[4851]: set +o allexport Feb 23 13:08:39 crc kubenswrapper[4851]: fi Feb 23 13:08:39 crc kubenswrapper[4851]: Feb 23 13:08:39 crc kubenswrapper[4851]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 23 13:08:39 crc kubenswrapper[4851]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 13:08:39 crc kubenswrapper[4851]: --disable-webhook \ Feb 23 13:08:39 crc kubenswrapper[4851]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 23 13:08:39 crc kubenswrapper[4851]: --loglevel="${LOGLEVEL}" Feb 23 13:08:39 crc kubenswrapper[4851]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 13:08:39 crc kubenswrapper[4851]: > logger="UnhandledError" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.304251 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 23 13:08:39 crc kubenswrapper[4851]: W0223 13:08:39.308497 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-cbf1d6795b8d5b8fa6b5e4e602ea71b6f23057a5272d0e1b4050afa315403f97 WatchSource:0}: Error finding container cbf1d6795b8d5b8fa6b5e4e602ea71b6f23057a5272d0e1b4050afa315403f97: Status 404 returned error can't find the container with id cbf1d6795b8d5b8fa6b5e4e602ea71b6f23057a5272d0e1b4050afa315403f97 Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.311093 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.312411 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.361482 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.361546 
4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.361556 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.361571 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.361594 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:39Z","lastTransitionTime":"2026-02-23T13:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.464145 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.464201 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.464221 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.464245 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.464265 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:39Z","lastTransitionTime":"2026-02-23T13:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.566774 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.567080 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.567097 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.567122 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.567139 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:39Z","lastTransitionTime":"2026-02-23T13:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.603291 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.603410 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.603479 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.603524 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:08:40.603494342 +0000 UTC m=+75.285198060 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.603569 4851 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.603613 4851 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.603645 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:40.603626796 +0000 UTC m=+75.285330474 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.603678 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:08:40.603662427 +0000 UTC m=+75.285366285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.669767 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.669805 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.669817 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.669832 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.669841 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:39Z","lastTransitionTime":"2026-02-23T13:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.704466 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.704549 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.704694 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.704727 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.704739 4851 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.704808 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 
13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.704851 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.704874 4851 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.704857 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:40.704841312 +0000 UTC m=+75.386544990 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:39 crc kubenswrapper[4851]: E0223 13:08:39.704992 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:40.704957846 +0000 UTC m=+75.386661524 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.771949 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.771985 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.771995 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.772007 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.772016 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:39Z","lastTransitionTime":"2026-02-23T13:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.788981 4851 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.875084 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.875128 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.875139 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.875155 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.875167 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:39Z","lastTransitionTime":"2026-02-23T13:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.955500 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:48:51.910627261 +0000 UTC Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.975289 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.976788 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.977266 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.977316 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.977353 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.977370 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.977381 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:39Z","lastTransitionTime":"2026-02-23T13:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.978956 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.980184 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.982562 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.983798 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.985251 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.987170 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.988548 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.990045 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.990593 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.992092 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.992625 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.993161 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.994143 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.994771 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.995809 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.996170 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" 
path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.996814 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.997969 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.998530 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 23 13:08:39 crc kubenswrapper[4851]: I0223 13:08:39.999623 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.000029 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.001061 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.001517 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.002094 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.003230 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.003768 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.004676 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.005187 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.006243 4851 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.006384 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.008037 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.008931 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.009502 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.010900 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.011525 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.012413 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.013099 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.014078 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.014573 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.015551 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.016137 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.017065 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.017504 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.018349 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.018880 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.020008 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.020496 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.021267 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.021731 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.022873 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.024156 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.025153 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.079794 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.079837 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.079849 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.079865 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.079875 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:40Z","lastTransitionTime":"2026-02-23T13:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.183431 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.183460 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.183470 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.183531 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.183543 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:40Z","lastTransitionTime":"2026-02-23T13:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.263880 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cbf1d6795b8d5b8fa6b5e4e602ea71b6f23057a5272d0e1b4050afa315403f97"} Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.266233 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"29c4d8ec7f1df4ef2a9bcbe382e2c454f3b003981333944b07f86a14b81a1fca"} Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.267232 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.267830 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8d0efb68b673e6683bd15681e9471fb4b63662c6f76fe6812e77dd41b2bccb76"} Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.268455 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.269129 4851 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 13:08:40 crc kubenswrapper[4851]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 13:08:40 crc kubenswrapper[4851]: if [[ -f "/env/_master" ]]; then Feb 23 13:08:40 crc kubenswrapper[4851]: set -o allexport Feb 23 13:08:40 crc kubenswrapper[4851]: source "/env/_master" Feb 23 13:08:40 crc kubenswrapper[4851]: set +o allexport Feb 23 13:08:40 crc kubenswrapper[4851]: fi Feb 23 13:08:40 crc kubenswrapper[4851]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 23 13:08:40 crc kubenswrapper[4851]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 23 13:08:40 crc kubenswrapper[4851]: ho_enable="--enable-hybrid-overlay" Feb 23 13:08:40 crc kubenswrapper[4851]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 23 13:08:40 crc kubenswrapper[4851]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 23 13:08:40 crc kubenswrapper[4851]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 23 13:08:40 crc kubenswrapper[4851]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 13:08:40 crc kubenswrapper[4851]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 23 13:08:40 crc kubenswrapper[4851]: --webhook-host=127.0.0.1 \ Feb 23 13:08:40 crc kubenswrapper[4851]: --webhook-port=9743 \ Feb 23 13:08:40 crc kubenswrapper[4851]: ${ho_enable} \ Feb 23 13:08:40 crc kubenswrapper[4851]: --enable-interconnect \ Feb 23 13:08:40 crc kubenswrapper[4851]: 
--disable-approver \ Feb 23 13:08:40 crc kubenswrapper[4851]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 23 13:08:40 crc kubenswrapper[4851]: --wait-for-kubernetes-api=200s \ Feb 23 13:08:40 crc kubenswrapper[4851]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 23 13:08:40 crc kubenswrapper[4851]: --loglevel="${LOGLEVEL}" Feb 23 13:08:40 crc kubenswrapper[4851]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:n
il,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 13:08:40 crc kubenswrapper[4851]: > logger="UnhandledError" Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.269189 4851 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 13:08:40 crc kubenswrapper[4851]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 23 13:08:40 crc kubenswrapper[4851]: set -o allexport Feb 23 13:08:40 crc kubenswrapper[4851]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 23 13:08:40 crc kubenswrapper[4851]: source /etc/kubernetes/apiserver-url.env Feb 23 13:08:40 crc kubenswrapper[4851]: else Feb 23 13:08:40 crc kubenswrapper[4851]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 23 13:08:40 crc kubenswrapper[4851]: exit 1 Feb 23 13:08:40 crc kubenswrapper[4851]: fi Feb 23 13:08:40 crc kubenswrapper[4851]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 23 13:08:40 crc kubenswrapper[4851]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 13:08:40 crc kubenswrapper[4851]: > logger="UnhandledError" Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.270817 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.271549 4851 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 13:08:40 crc kubenswrapper[4851]: container 
&Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 13:08:40 crc kubenswrapper[4851]: if [[ -f "/env/_master" ]]; then Feb 23 13:08:40 crc kubenswrapper[4851]: set -o allexport Feb 23 13:08:40 crc kubenswrapper[4851]: source "/env/_master" Feb 23 13:08:40 crc kubenswrapper[4851]: set +o allexport Feb 23 13:08:40 crc kubenswrapper[4851]: fi Feb 23 13:08:40 crc kubenswrapper[4851]: Feb 23 13:08:40 crc kubenswrapper[4851]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 23 13:08:40 crc kubenswrapper[4851]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 13:08:40 crc kubenswrapper[4851]: --disable-webhook \ Feb 23 13:08:40 crc kubenswrapper[4851]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 23 13:08:40 crc kubenswrapper[4851]: --loglevel="${LOGLEVEL}" Feb 23 13:08:40 crc kubenswrapper[4851]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 13:08:40 crc kubenswrapper[4851]: > logger="UnhandledError" Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.273357 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.282369 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.286612 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.286644 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.286652 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.286664 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.286675 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:40Z","lastTransitionTime":"2026-02-23T13:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.291682 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.304713 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.313211 4851 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.315850 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.325562 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.339719 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.350460 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.362318 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.371769 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.382421 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.388539 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.388580 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.388593 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.388610 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.388623 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:40Z","lastTransitionTime":"2026-02-23T13:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.394932 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.403740 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.491766 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.491834 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 
13:08:40.491854 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.491878 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.491898 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:40Z","lastTransitionTime":"2026-02-23T13:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.594785 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.594828 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.594837 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.594853 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.594865 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:40Z","lastTransitionTime":"2026-02-23T13:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.612242 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.612370 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:08:42.612355373 +0000 UTC m=+77.294059051 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.612495 4851 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.612531 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:42.612523758 +0000 UTC m=+77.294227436 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.612818 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.612884 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.612967 4851 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.613005 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:42.612997282 +0000 UTC m=+77.294700960 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.697667 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.697723 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.697733 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.697747 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.697757 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:40Z","lastTransitionTime":"2026-02-23T13:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.713428 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.713508 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.713641 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.713661 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.713674 4851 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.713724 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:42.713708822 +0000 UTC m=+77.395412510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.714135 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.714151 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.714161 4851 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.714194 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:42.714183117 +0000 UTC m=+77.395886805 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.801123 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.801160 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.801171 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.801188 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.801199 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:40Z","lastTransitionTime":"2026-02-23T13:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.904526 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.904582 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.904600 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.904622 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.904640 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:40Z","lastTransitionTime":"2026-02-23T13:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.956140 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 04:01:26.276783556 +0000 UTC Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.968686 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.968871 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.969450 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.969546 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:08:40 crc kubenswrapper[4851]: I0223 13:08:40.969613 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:40 crc kubenswrapper[4851]: E0223 13:08:40.969693 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.007833 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.007882 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.007890 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.007903 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.007913 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:41Z","lastTransitionTime":"2026-02-23T13:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.110662 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.110698 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.110708 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.110726 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.110743 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:41Z","lastTransitionTime":"2026-02-23T13:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.213421 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.213461 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.213473 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.213490 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.213503 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:41Z","lastTransitionTime":"2026-02-23T13:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.315162 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.315202 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.315210 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.315225 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.315234 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:41Z","lastTransitionTime":"2026-02-23T13:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.417939 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.417970 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.417977 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.417990 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.418000 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:41Z","lastTransitionTime":"2026-02-23T13:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.520500 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.520543 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.520551 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.520565 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.520575 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:41Z","lastTransitionTime":"2026-02-23T13:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.622557 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.622596 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.622607 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.622622 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.622633 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:41Z","lastTransitionTime":"2026-02-23T13:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.725174 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.725219 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.725228 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.725245 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.725256 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:41Z","lastTransitionTime":"2026-02-23T13:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.827885 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.827941 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.827953 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.827969 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.827981 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:41Z","lastTransitionTime":"2026-02-23T13:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.929969 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.930014 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.930030 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.930044 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.930054 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:41Z","lastTransitionTime":"2026-02-23T13:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:41 crc kubenswrapper[4851]: I0223 13:08:41.957228 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:36:36.423345686 +0000 UTC Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.035153 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.035248 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.035266 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.035287 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.035306 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:42Z","lastTransitionTime":"2026-02-23T13:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.138032 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.138081 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.138092 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.138108 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.138119 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:42Z","lastTransitionTime":"2026-02-23T13:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.241016 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.241059 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.241071 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.241089 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.241101 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:42Z","lastTransitionTime":"2026-02-23T13:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.343787 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.343831 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.343842 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.343858 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.343869 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:42Z","lastTransitionTime":"2026-02-23T13:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.446580 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.446622 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.446637 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.446659 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.446674 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:42Z","lastTransitionTime":"2026-02-23T13:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.549163 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.549203 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.549212 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.549227 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.549238 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:42Z","lastTransitionTime":"2026-02-23T13:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.631917 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.632022 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.632085 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.632142 4851 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.632216 4851 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.632231 4851 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:46.632211348 +0000 UTC m=+81.313915026 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.632275 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:46.632258819 +0000 UTC m=+81.313962577 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.633137 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:08:46.633124695 +0000 UTC m=+81.314828373 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.652494 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.652546 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.652558 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.652574 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.652586 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:42Z","lastTransitionTime":"2026-02-23T13:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.733581 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.733686 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.733779 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.733798 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.733821 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.733833 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.733836 4851 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr 
for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.733842 4851 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.733896 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:46.733881007 +0000 UTC m=+81.415584685 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.733910 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:46.733904998 +0000 UTC m=+81.415608676 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.755403 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.755458 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.755471 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.755488 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.755502 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:42Z","lastTransitionTime":"2026-02-23T13:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.858236 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.858290 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.858301 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.858322 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.858376 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:42Z","lastTransitionTime":"2026-02-23T13:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.958137 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:04:06.402950321 +0000 UTC Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.961981 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.962021 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.962034 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.962054 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.962065 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:42Z","lastTransitionTime":"2026-02-23T13:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.968593 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.968655 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:42 crc kubenswrapper[4851]: I0223 13:08:42.968610 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.968810 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.968891 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:08:42 crc kubenswrapper[4851]: E0223 13:08:42.969054 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.064597 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.064632 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.064642 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.064656 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.064668 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:43Z","lastTransitionTime":"2026-02-23T13:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.166560 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.166599 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.166615 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.166633 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.166644 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:43Z","lastTransitionTime":"2026-02-23T13:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.268925 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.268984 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.268993 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.269006 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.269016 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:43Z","lastTransitionTime":"2026-02-23T13:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.370867 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.370911 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.370922 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.370937 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.370948 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:43Z","lastTransitionTime":"2026-02-23T13:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.473360 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.473407 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.473423 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.473439 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.473452 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:43Z","lastTransitionTime":"2026-02-23T13:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.575992 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.576036 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.576045 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.576062 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.576070 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:43Z","lastTransitionTime":"2026-02-23T13:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.678482 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.678524 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.678532 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.678547 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.678564 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:43Z","lastTransitionTime":"2026-02-23T13:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.781169 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.781209 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.781219 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.781235 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.781245 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:43Z","lastTransitionTime":"2026-02-23T13:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.883905 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.883959 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.883975 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.884001 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.884024 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:43Z","lastTransitionTime":"2026-02-23T13:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.958741 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:18:37.289295033 +0000 UTC Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.986843 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.986883 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.986895 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.986909 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:43 crc kubenswrapper[4851]: I0223 13:08:43.986921 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:43Z","lastTransitionTime":"2026-02-23T13:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.090299 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.090381 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.090401 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.090425 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.090443 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:44Z","lastTransitionTime":"2026-02-23T13:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.193452 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.193477 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.193485 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.193498 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.193506 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:44Z","lastTransitionTime":"2026-02-23T13:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.296382 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.296435 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.296450 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.296466 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.296477 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:44Z","lastTransitionTime":"2026-02-23T13:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.399472 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.399521 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.399531 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.399548 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.399559 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:44Z","lastTransitionTime":"2026-02-23T13:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.502721 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.502766 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.502775 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.502795 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.502807 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:44Z","lastTransitionTime":"2026-02-23T13:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.605214 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.605280 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.605298 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.605320 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.605357 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:44Z","lastTransitionTime":"2026-02-23T13:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.708049 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.708099 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.708111 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.708131 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.708143 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:44Z","lastTransitionTime":"2026-02-23T13:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.810101 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.810157 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.810165 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.810180 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.810190 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:44Z","lastTransitionTime":"2026-02-23T13:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.912301 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.912379 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.912391 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.912407 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.912420 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:44Z","lastTransitionTime":"2026-02-23T13:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.959776 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:58:25.375901858 +0000 UTC Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.968243 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.968306 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:44 crc kubenswrapper[4851]: E0223 13:08:44.968374 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:08:44 crc kubenswrapper[4851]: E0223 13:08:44.968494 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:08:44 crc kubenswrapper[4851]: I0223 13:08:44.968243 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:44 crc kubenswrapper[4851]: E0223 13:08:44.968599 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.014591 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.014633 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.014644 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.014658 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.014668 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.117134 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.117200 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.117216 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.117264 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.117285 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.220271 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.220323 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.220352 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.220367 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.220378 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.322817 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.322859 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.322871 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.322886 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.322894 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.424858 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.424894 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.424902 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.424915 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.424923 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.527196 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.527247 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.527256 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.527272 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.527281 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.629607 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.629646 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.629655 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.629668 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.629677 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.732617 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.732676 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.732688 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.732706 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.732720 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.781729 4851 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.835645 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.835683 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.835695 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.835711 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.835722 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.879209 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.879315 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.879366 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.879384 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.879396 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: E0223 13:08:45.891154 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.900395 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.900430 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.900439 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.900452 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.900461 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: E0223 13:08:45.910640 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.914057 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.914095 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.914104 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.914117 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.914127 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: E0223 13:08:45.924940 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.928770 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.928799 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.928806 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.928820 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.928829 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: E0223 13:08:45.939346 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.942916 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.942974 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.942991 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.943010 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.943023 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: E0223 13:08:45.954205 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:45 crc kubenswrapper[4851]: E0223 13:08:45.954316 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.956072 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.956100 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.956108 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.956125 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.956136 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:45Z","lastTransitionTime":"2026-02-23T13:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.960649 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:53:50.974820469 +0000 UTC Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.978130 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.987638 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:45 crc kubenswrapper[4851]: I0223 13:08:45.996059 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.006957 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.018007 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.028848 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.057987 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.058024 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.058034 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.058048 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.058057 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:46Z","lastTransitionTime":"2026-02-23T13:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.160516 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.160561 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.160572 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.160587 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.160598 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:46Z","lastTransitionTime":"2026-02-23T13:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.263130 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.263180 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.263198 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.263218 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.263230 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:46Z","lastTransitionTime":"2026-02-23T13:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.365261 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.365302 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.365320 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.365346 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.365371 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:46Z","lastTransitionTime":"2026-02-23T13:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.467770 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.467809 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.467818 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.467835 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.467844 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:46Z","lastTransitionTime":"2026-02-23T13:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.570486 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.570519 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.570527 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.570539 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.570550 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:46Z","lastTransitionTime":"2026-02-23T13:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.664615 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.664677 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.664745 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.664801 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:08:54.664775545 +0000 UTC m=+89.346479223 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.664834 4851 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.664889 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:54.664872698 +0000 UTC m=+89.346576446 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.664943 4851 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.665059 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:08:54.665033163 +0000 UTC m=+89.346736871 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.672984 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.673025 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.673035 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.673051 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.673061 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:46Z","lastTransitionTime":"2026-02-23T13:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.742675 4851 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.765782 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.765843 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.765964 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.766022 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.766034 4851 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.766085 4851 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:54.766068924 +0000 UTC m=+89.447772602 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.765986 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.766121 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.766135 4851 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.766178 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 13:08:54.766164716 +0000 UTC m=+89.447868394 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.776276 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.776336 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.776347 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.776361 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.776370 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:46Z","lastTransitionTime":"2026-02-23T13:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.878612 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.878649 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.878658 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.878671 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.878680 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:46Z","lastTransitionTime":"2026-02-23T13:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.961489 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:13:40.73583129 +0000 UTC Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.967987 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.968216 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.968301 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.968224 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.968369 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.968441 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.978279 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.979187 4851 scope.go:117] "RemoveContainer" containerID="93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1" Feb 23 13:08:46 crc kubenswrapper[4851]: E0223 13:08:46.979561 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.980857 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.980899 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.980912 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.980929 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:46 crc kubenswrapper[4851]: I0223 13:08:46.980942 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:46Z","lastTransitionTime":"2026-02-23T13:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.083357 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.083403 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.083412 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.083426 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.083439 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:47Z","lastTransitionTime":"2026-02-23T13:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.185741 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.185775 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.185786 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.185800 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.185809 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:47Z","lastTransitionTime":"2026-02-23T13:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.282649 4851 scope.go:117] "RemoveContainer" containerID="93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1" Feb 23 13:08:47 crc kubenswrapper[4851]: E0223 13:08:47.282799 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.287973 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.288018 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.288028 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.288044 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.288055 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:47Z","lastTransitionTime":"2026-02-23T13:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.390302 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.390358 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.390370 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.390385 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.390394 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:47Z","lastTransitionTime":"2026-02-23T13:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.497837 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.497878 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.497887 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.497902 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.497915 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:47Z","lastTransitionTime":"2026-02-23T13:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.600918 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.600956 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.600965 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.600983 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.600997 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:47Z","lastTransitionTime":"2026-02-23T13:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.703289 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.703355 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.703372 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.703395 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.703407 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:47Z","lastTransitionTime":"2026-02-23T13:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.806085 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.806129 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.806137 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.806170 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.806179 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:47Z","lastTransitionTime":"2026-02-23T13:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.908971 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.909013 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.909022 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.909041 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.909050 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:47Z","lastTransitionTime":"2026-02-23T13:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:47 crc kubenswrapper[4851]: I0223 13:08:47.962222 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:52:28.124473168 +0000 UTC Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.011397 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.011439 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.011448 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.011462 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.011473 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:48Z","lastTransitionTime":"2026-02-23T13:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.114063 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.114110 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.114120 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.114135 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.114148 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:48Z","lastTransitionTime":"2026-02-23T13:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.216189 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.216222 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.216230 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.216242 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.216252 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:48Z","lastTransitionTime":"2026-02-23T13:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.318528 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.318572 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.318588 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.318602 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.318613 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:48Z","lastTransitionTime":"2026-02-23T13:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.420649 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.420682 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.420690 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.420704 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.420714 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:48Z","lastTransitionTime":"2026-02-23T13:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.523051 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.523096 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.523105 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.523123 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.523136 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:48Z","lastTransitionTime":"2026-02-23T13:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.626627 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.626681 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.626693 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.626716 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.626728 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:48Z","lastTransitionTime":"2026-02-23T13:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.732010 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.732058 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.732070 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.732123 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.732141 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:48Z","lastTransitionTime":"2026-02-23T13:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.834840 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.834916 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.834933 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.834952 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.834965 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:48Z","lastTransitionTime":"2026-02-23T13:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.937885 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.937955 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.937965 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.937998 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.938011 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:48Z","lastTransitionTime":"2026-02-23T13:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.962421 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 21:35:34.597921524 +0000 UTC Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.967659 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.967712 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:48 crc kubenswrapper[4851]: I0223 13:08:48.967774 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:48 crc kubenswrapper[4851]: E0223 13:08:48.967873 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:08:48 crc kubenswrapper[4851]: E0223 13:08:48.968058 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:08:48 crc kubenswrapper[4851]: E0223 13:08:48.968135 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.041181 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.041240 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.041252 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.041273 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.041290 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:49Z","lastTransitionTime":"2026-02-23T13:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.144572 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.144627 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.144639 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.144663 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.144681 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:49Z","lastTransitionTime":"2026-02-23T13:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.247509 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.247551 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.247559 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.247574 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.247586 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:49Z","lastTransitionTime":"2026-02-23T13:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.350695 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.350837 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.350853 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.350867 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.350878 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:49Z","lastTransitionTime":"2026-02-23T13:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.454137 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.454194 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.454203 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.454217 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.454226 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:49Z","lastTransitionTime":"2026-02-23T13:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.556959 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.556999 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.557010 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.557026 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.557036 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:49Z","lastTransitionTime":"2026-02-23T13:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.659921 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.659962 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.659973 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.659986 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.659994 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:49Z","lastTransitionTime":"2026-02-23T13:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.762204 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.762261 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.762276 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.762297 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.762316 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:49Z","lastTransitionTime":"2026-02-23T13:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.864243 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.864286 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.864297 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.864313 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.864352 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:49Z","lastTransitionTime":"2026-02-23T13:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.962554 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:04:48.44525024 +0000 UTC Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.966506 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.966555 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.966571 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.966593 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:49 crc kubenswrapper[4851]: I0223 13:08:49.966611 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:49Z","lastTransitionTime":"2026-02-23T13:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.069014 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.069074 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.069091 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.069107 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.069118 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:50Z","lastTransitionTime":"2026-02-23T13:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.171713 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.171783 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.171813 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.171830 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.171841 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:50Z","lastTransitionTime":"2026-02-23T13:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.275196 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.275382 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.275410 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.275445 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.275467 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:50Z","lastTransitionTime":"2026-02-23T13:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.378299 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.378366 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.378387 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.378407 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.378419 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:50Z","lastTransitionTime":"2026-02-23T13:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.481865 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.481909 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.481919 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.481934 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.481947 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:50Z","lastTransitionTime":"2026-02-23T13:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.584765 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.584846 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.584876 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.584906 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.584930 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:50Z","lastTransitionTime":"2026-02-23T13:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.687457 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.687510 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.687522 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.687540 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.687559 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:50Z","lastTransitionTime":"2026-02-23T13:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.790561 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.790661 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.790688 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.790729 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.790760 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:50Z","lastTransitionTime":"2026-02-23T13:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.893764 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.893808 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.893819 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.893833 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.893842 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:50Z","lastTransitionTime":"2026-02-23T13:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.963757 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:59:10.537936443 +0000 UTC Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.968072 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.968082 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:50 crc kubenswrapper[4851]: E0223 13:08:50.968311 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.968301 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:50 crc kubenswrapper[4851]: E0223 13:08:50.968194 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:08:50 crc kubenswrapper[4851]: E0223 13:08:50.968651 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.997495 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.997562 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.997578 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.997603 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:50 crc kubenswrapper[4851]: I0223 13:08:50.997620 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:50Z","lastTransitionTime":"2026-02-23T13:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.101130 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.101169 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.101179 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.101194 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.101202 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:51Z","lastTransitionTime":"2026-02-23T13:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.204909 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.204948 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.204958 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.204973 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.204982 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:51Z","lastTransitionTime":"2026-02-23T13:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.307728 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.307800 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.307812 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.307830 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.307848 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:51Z","lastTransitionTime":"2026-02-23T13:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.411973 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.412033 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.412042 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.412057 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.412067 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:51Z","lastTransitionTime":"2026-02-23T13:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.514696 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.514734 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.514745 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.514760 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.514772 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:51Z","lastTransitionTime":"2026-02-23T13:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.617044 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.617087 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.617097 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.617114 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.617128 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:51Z","lastTransitionTime":"2026-02-23T13:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.719381 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.719460 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.719468 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.719485 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.719494 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:51Z","lastTransitionTime":"2026-02-23T13:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.821926 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.821958 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.821967 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.821980 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.821989 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:51Z","lastTransitionTime":"2026-02-23T13:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.923774 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.923813 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.923822 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.923836 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.923846 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:51Z","lastTransitionTime":"2026-02-23T13:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:51 crc kubenswrapper[4851]: I0223 13:08:51.964817 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:27:03.414367203 +0000 UTC Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.026647 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.026691 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.026700 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.026714 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.026725 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:52Z","lastTransitionTime":"2026-02-23T13:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.129606 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.129631 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.129639 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.129666 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.129675 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:52Z","lastTransitionTime":"2026-02-23T13:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.233125 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.233180 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.233192 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.233207 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.233218 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:52Z","lastTransitionTime":"2026-02-23T13:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.340063 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.340102 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.340112 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.340127 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.340147 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:52Z","lastTransitionTime":"2026-02-23T13:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.442527 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.442610 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.442625 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.442644 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.442657 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:52Z","lastTransitionTime":"2026-02-23T13:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.544711 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.544746 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.544755 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.544769 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.544780 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:52Z","lastTransitionTime":"2026-02-23T13:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.647492 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.647554 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.647570 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.647594 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.647610 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:52Z","lastTransitionTime":"2026-02-23T13:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.751626 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.751702 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.751720 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.751747 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.751768 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:52Z","lastTransitionTime":"2026-02-23T13:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.854241 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.854286 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.854294 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.854309 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.854317 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:52Z","lastTransitionTime":"2026-02-23T13:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.957175 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.957205 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.957214 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.957227 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.957235 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:52Z","lastTransitionTime":"2026-02-23T13:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.965808 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 13:15:00.382393252 +0000 UTC Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.968259 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.968297 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:52 crc kubenswrapper[4851]: I0223 13:08:52.968504 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:52 crc kubenswrapper[4851]: E0223 13:08:52.968428 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:08:52 crc kubenswrapper[4851]: E0223 13:08:52.968810 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:08:52 crc kubenswrapper[4851]: E0223 13:08:52.968873 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.060786 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.060843 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.060853 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.060866 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.060876 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:53Z","lastTransitionTime":"2026-02-23T13:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.164635 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.164723 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.164754 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.164808 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.164832 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:53Z","lastTransitionTime":"2026-02-23T13:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.267099 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.267136 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.267150 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.267169 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.267183 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:53Z","lastTransitionTime":"2026-02-23T13:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.370687 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.370766 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.370786 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.370816 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.370833 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:53Z","lastTransitionTime":"2026-02-23T13:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.473834 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.473923 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.473948 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.473984 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.474031 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:53Z","lastTransitionTime":"2026-02-23T13:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.576916 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.576989 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.577005 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.577027 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.577042 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:53Z","lastTransitionTime":"2026-02-23T13:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.679369 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.679439 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.679454 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.679479 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.679494 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:53Z","lastTransitionTime":"2026-02-23T13:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.781731 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.781773 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.781782 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.781797 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.781807 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:53Z","lastTransitionTime":"2026-02-23T13:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.885169 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.885226 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.885243 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.885263 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.885273 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:53Z","lastTransitionTime":"2026-02-23T13:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.966753 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:28:27.075254943 +0000 UTC Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.988230 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.988273 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.988285 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.988303 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:53 crc kubenswrapper[4851]: I0223 13:08:53.988315 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:53Z","lastTransitionTime":"2026-02-23T13:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.090130 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.090156 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.090164 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.090177 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.090186 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:54Z","lastTransitionTime":"2026-02-23T13:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.192619 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.192655 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.192669 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.192692 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.192726 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:54Z","lastTransitionTime":"2026-02-23T13:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.295723 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.296393 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.296441 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.296783 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.296970 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:54Z","lastTransitionTime":"2026-02-23T13:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.299881 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a"} Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.318567 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.332401 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] 
check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089
e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.343607 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.355287 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.370652 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.384084 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.394505 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.399074 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.399106 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 
13:08:54.399114 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.399128 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.399138 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:54Z","lastTransitionTime":"2026-02-23T13:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.501595 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.501637 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.501645 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.501658 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.501667 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:54Z","lastTransitionTime":"2026-02-23T13:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.604780 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.605137 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.605831 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.606018 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.606160 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:54Z","lastTransitionTime":"2026-02-23T13:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.709047 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.709300 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.709421 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.709579 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.709676 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:54Z","lastTransitionTime":"2026-02-23T13:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.753781 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.754001 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 13:09:10.753972457 +0000 UTC m=+105.435676135 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.754263 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.754430 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.754452 4851 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.754481 4851 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.754729 4851 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:09:10.754707199 +0000 UTC m=+105.436410877 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.754893 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:09:10.754846303 +0000 UTC m=+105.436549981 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.812464 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.812514 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.812533 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.812558 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.812576 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:54Z","lastTransitionTime":"2026-02-23T13:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.856847 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.857394 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.857150 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.857995 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.858119 4851 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.858360 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 13:09:10.858277965 +0000 UTC m=+105.539981643 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.857765 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.859607 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.859629 4851 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.859718 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 13:09:10.859692817 +0000 UTC m=+105.541396505 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.915479 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.915543 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.915554 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.915573 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.915583 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:54Z","lastTransitionTime":"2026-02-23T13:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.967451 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:42:19.499828848 +0000 UTC Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.968274 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.968317 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:08:54 crc kubenswrapper[4851]: I0223 13:08:54.968295 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.968411 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.968486 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:08:54 crc kubenswrapper[4851]: E0223 13:08:54.968603 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.022006 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.022054 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.022062 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.022078 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.022088 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:55Z","lastTransitionTime":"2026-02-23T13:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.124843 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.124883 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.124891 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.124905 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.124916 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:55Z","lastTransitionTime":"2026-02-23T13:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.228071 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.228131 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.228144 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.228171 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.228189 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:55Z","lastTransitionTime":"2026-02-23T13:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.307819 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.308007 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.310385 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.322630 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.333772 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.333848 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.333869 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.333899 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.333921 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:55Z","lastTransitionTime":"2026-02-23T13:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.338157 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.353258 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.367306 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.382596 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] 
check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089
e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.395827 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.405223 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.417285 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.428988 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.436869 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.436909 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.436920 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.436936 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.436948 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:55Z","lastTransitionTime":"2026-02-23T13:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.445377 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.456459 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.467748 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] 
check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089
e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.481617 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.493341 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.539315 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.539381 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.539389 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.539404 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.539416 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:55Z","lastTransitionTime":"2026-02-23T13:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.642549 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.642596 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.642609 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.642625 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.642637 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:55Z","lastTransitionTime":"2026-02-23T13:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.745898 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.745949 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.745961 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.745976 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.745988 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:55Z","lastTransitionTime":"2026-02-23T13:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.848417 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.848475 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.848494 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.848518 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.848535 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:55Z","lastTransitionTime":"2026-02-23T13:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.951194 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.951256 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.951273 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.951295 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.951311 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:55Z","lastTransitionTime":"2026-02-23T13:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:55 crc kubenswrapper[4851]: I0223 13:08:55.968635 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:43:50.743273825 +0000 UTC Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.000176 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:55Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.026712 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.042281 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.052899 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.052925 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.052934 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 
13:08:56.052949 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.052959 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.057501 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.057561 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.057583 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.057614 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.057633 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.061006 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f
50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: E0223 13:08:56.070227 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.073424 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.073472 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.073487 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.073505 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.073518 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.089092 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: E0223 13:08:56.091902 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.097257 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.097426 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.097515 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.097612 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.097695 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.102799 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: E0223 13:08:56.108666 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.111965 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.111992 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.112002 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.112020 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.112032 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.115291 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: E0223 13:08:56.126577 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.130107 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.130138 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.130149 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.130163 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.130194 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:56 crc kubenswrapper[4851]: E0223 13:08:56.144417 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:08:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:08:56 crc kubenswrapper[4851]: E0223 13:08:56.144536 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.155324 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.155368 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.155375 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.155390 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.155400 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.258291 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.258320 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.258344 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.258357 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.258405 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.361502 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.361543 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.361553 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.361569 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.361581 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.464810 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.464873 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.464885 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.464909 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.464923 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.567365 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.567417 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.567435 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.567453 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.567500 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.670276 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.670315 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.670323 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.670349 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.670373 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.772510 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.772543 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.772551 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.772565 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.772574 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.875460 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.875545 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.875575 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.875607 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.875635 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.968207 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 13:08:56 crc kubenswrapper[4851]: E0223 13:08:56.968740 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.968420 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 13:08:56 crc kubenswrapper[4851]: E0223 13:08:56.969209 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.968819 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:49:49.083739029 +0000 UTC
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.968284 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 13:08:56 crc kubenswrapper[4851]: E0223 13:08:56.969482 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.977475 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.977555 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.977573 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.977596 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:56 crc kubenswrapper[4851]: I0223 13:08:56.977614 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:56Z","lastTransitionTime":"2026-02-23T13:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.079417 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.079737 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.079859 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.079963 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.080046 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:57Z","lastTransitionTime":"2026-02-23T13:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.182716 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.182763 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.182775 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.182791 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.182803 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:57Z","lastTransitionTime":"2026-02-23T13:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.285402 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.285723 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.285863 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.285979 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.286071 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:57Z","lastTransitionTime":"2026-02-23T13:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.388046 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.388105 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.388115 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.388131 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.388141 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:57Z","lastTransitionTime":"2026-02-23T13:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.490443 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.490485 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.490495 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.490511 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.490525 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:57Z","lastTransitionTime":"2026-02-23T13:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.592714 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.592751 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.592761 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.592776 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.592787 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:57Z","lastTransitionTime":"2026-02-23T13:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.695083 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.695152 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.695170 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.695198 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.695216 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:57Z","lastTransitionTime":"2026-02-23T13:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.796960 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.797016 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.797032 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.797057 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.797073 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:57Z","lastTransitionTime":"2026-02-23T13:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.899218 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.899272 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.899285 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.899302 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.899315 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:57Z","lastTransitionTime":"2026-02-23T13:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:57 crc kubenswrapper[4851]: I0223 13:08:57.969519 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:49:49.144329044 +0000 UTC
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.002860 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.002950 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.002971 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.002993 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.003010 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:58Z","lastTransitionTime":"2026-02-23T13:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.105162 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.105208 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.105226 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.105246 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.105262 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:58Z","lastTransitionTime":"2026-02-23T13:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.207448 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.207490 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.207501 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.207517 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.207528 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:58Z","lastTransitionTime":"2026-02-23T13:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.310073 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.310115 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.310126 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.310143 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.310157 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:58Z","lastTransitionTime":"2026-02-23T13:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.412828 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.412875 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.412889 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.412914 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.412930 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:58Z","lastTransitionTime":"2026-02-23T13:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.516164 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.516260 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.516280 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.516308 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.516325 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:58Z","lastTransitionTime":"2026-02-23T13:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.618506 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.618554 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.618564 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.618580 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.618592 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:58Z","lastTransitionTime":"2026-02-23T13:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.721190 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.721288 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.721312 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.721386 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.721407 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:58Z","lastTransitionTime":"2026-02-23T13:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.823411 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.823438 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.823446 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.823459 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.823469 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:58Z","lastTransitionTime":"2026-02-23T13:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.925733 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.925777 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.925788 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.925803 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.925812 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:58Z","lastTransitionTime":"2026-02-23T13:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.968528 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 13:08:58 crc kubenswrapper[4851]: E0223 13:08:58.968654 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.968947 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 13:08:58 crc kubenswrapper[4851]: E0223 13:08:58.969001 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.969037 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 13:08:58 crc kubenswrapper[4851]: E0223 13:08:58.969075 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 13:08:58 crc kubenswrapper[4851]: I0223 13:08:58.970400 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:53:14.654525782 +0000 UTC
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.027603 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.027961 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.028053 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.028124 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.028206 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:59Z","lastTransitionTime":"2026-02-23T13:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.130142 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.130185 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.130199 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.130214 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.130228 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:59Z","lastTransitionTime":"2026-02-23T13:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.233122 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.233218 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.233241 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.233273 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.233297 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:59Z","lastTransitionTime":"2026-02-23T13:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.336169 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.336210 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.336221 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.336237 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.336250 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:59Z","lastTransitionTime":"2026-02-23T13:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.438285 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.438459 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.438475 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.438489 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.438503 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:59Z","lastTransitionTime":"2026-02-23T13:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.540579 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.540616 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.540627 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.540642 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.540654 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:59Z","lastTransitionTime":"2026-02-23T13:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.642996 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.643036 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.643046 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.643061 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.643074 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:59Z","lastTransitionTime":"2026-02-23T13:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.745522 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.745624 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.745636 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.745648 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.745657 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:59Z","lastTransitionTime":"2026-02-23T13:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.848050 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.848158 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.848181 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.848228 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.848255 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:59Z","lastTransitionTime":"2026-02-23T13:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.950261 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.950306 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.950319 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.950353 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.950364 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:08:59Z","lastTransitionTime":"2026-02-23T13:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.970611 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:50:59.639647934 +0000 UTC Feb 23 13:08:59 crc kubenswrapper[4851]: I0223 13:08:59.984896 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.053449 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.053486 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.053497 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.053512 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.053526 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:00Z","lastTransitionTime":"2026-02-23T13:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.156770 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.156839 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.156851 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.156876 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.156888 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:00Z","lastTransitionTime":"2026-02-23T13:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.260467 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.260524 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.260535 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.260556 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.260567 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:00Z","lastTransitionTime":"2026-02-23T13:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.364386 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.364469 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.364487 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.364520 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.364544 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:00Z","lastTransitionTime":"2026-02-23T13:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.467705 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.467764 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.467786 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.467812 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.467823 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:00Z","lastTransitionTime":"2026-02-23T13:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.575280 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.575348 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.575362 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.575382 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.575395 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:00Z","lastTransitionTime":"2026-02-23T13:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.678259 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.678382 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.678409 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.678441 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.678461 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:00Z","lastTransitionTime":"2026-02-23T13:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.781722 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.781784 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.781801 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.781826 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.781842 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:00Z","lastTransitionTime":"2026-02-23T13:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.884572 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.884635 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.884653 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.884676 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.884695 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:00Z","lastTransitionTime":"2026-02-23T13:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.968545 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.968564 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:00 crc kubenswrapper[4851]: E0223 13:09:00.968752 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.968564 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:00 crc kubenswrapper[4851]: E0223 13:09:00.968918 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:00 crc kubenswrapper[4851]: E0223 13:09:00.969093 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.971696 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:04:12.547553659 +0000 UTC Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.986756 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.986815 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.986827 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.986845 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:00 crc kubenswrapper[4851]: I0223 13:09:00.986858 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:00Z","lastTransitionTime":"2026-02-23T13:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.089845 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.089900 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.089910 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.089928 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.089939 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:01Z","lastTransitionTime":"2026-02-23T13:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.192512 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.192574 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.192591 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.192608 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.192620 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:01Z","lastTransitionTime":"2026-02-23T13:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.294913 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.294980 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.294997 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.295019 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.295036 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:01Z","lastTransitionTime":"2026-02-23T13:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.397447 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.397484 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.397493 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.397509 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.397519 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:01Z","lastTransitionTime":"2026-02-23T13:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.499530 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.499570 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.499580 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.499601 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.499612 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:01Z","lastTransitionTime":"2026-02-23T13:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.602825 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.602871 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.602880 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.602894 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.602903 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:01Z","lastTransitionTime":"2026-02-23T13:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.705083 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.705115 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.705124 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.705135 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.705143 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:01Z","lastTransitionTime":"2026-02-23T13:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.807207 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.807266 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.807280 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.807297 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.807308 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:01Z","lastTransitionTime":"2026-02-23T13:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.909817 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.909849 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.909857 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.909869 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.909877 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:01Z","lastTransitionTime":"2026-02-23T13:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.968744 4851 scope.go:117] "RemoveContainer" containerID="93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1" Feb 23 13:09:01 crc kubenswrapper[4851]: I0223 13:09:01.972124 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:41:01.546712135 +0000 UTC Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.011948 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.011986 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.011998 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.012013 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.012025 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:02Z","lastTransitionTime":"2026-02-23T13:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.114471 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.114499 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.114507 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.114521 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.114532 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:02Z","lastTransitionTime":"2026-02-23T13:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.216999 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.217071 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.217080 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.217092 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.217100 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:02Z","lastTransitionTime":"2026-02-23T13:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.319884 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.319949 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.319958 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.319971 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.320012 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:02Z","lastTransitionTime":"2026-02-23T13:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.329776 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.331089 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407"} Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.331528 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.345477 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ba
a38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:02Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.355841 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:02Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.365913 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:02Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.378839 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410
cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:02Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.392402 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:02Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.413381 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:02Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.422458 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.422509 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.422520 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.422534 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.422565 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:02Z","lastTransitionTime":"2026-02-23T13:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.427199 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:02Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.438613 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:02Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.525493 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.525550 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.525562 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.525580 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.525630 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:02Z","lastTransitionTime":"2026-02-23T13:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.627700 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.627732 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.627740 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.627752 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.627761 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:02Z","lastTransitionTime":"2026-02-23T13:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.730592 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.730658 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.730668 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.730685 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.730697 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:02Z","lastTransitionTime":"2026-02-23T13:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.833267 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.833305 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.833318 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.833359 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.833373 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:02Z","lastTransitionTime":"2026-02-23T13:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.936202 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.936244 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.936262 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.936275 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.936284 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:02Z","lastTransitionTime":"2026-02-23T13:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.967759 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.967796 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.967803 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:02 crc kubenswrapper[4851]: E0223 13:09:02.967927 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:02 crc kubenswrapper[4851]: E0223 13:09:02.968103 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:02 crc kubenswrapper[4851]: E0223 13:09:02.968314 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:02 crc kubenswrapper[4851]: I0223 13:09:02.972533 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 22:27:24.029767178 +0000 UTC Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.039032 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.039069 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.039079 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.039091 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.039100 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:03Z","lastTransitionTime":"2026-02-23T13:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.141792 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.141829 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.141839 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.141854 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.141864 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:03Z","lastTransitionTime":"2026-02-23T13:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.243977 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.244010 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.244018 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.244033 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.244045 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:03Z","lastTransitionTime":"2026-02-23T13:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.346376 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.346406 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.346414 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.346429 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.346438 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:03Z","lastTransitionTime":"2026-02-23T13:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.448967 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.449020 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.449029 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.449045 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.449055 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:03Z","lastTransitionTime":"2026-02-23T13:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.551999 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.552050 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.552061 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.552076 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.552105 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:03Z","lastTransitionTime":"2026-02-23T13:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.654792 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.654833 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.654844 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.654859 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.654869 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:03Z","lastTransitionTime":"2026-02-23T13:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.757006 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.757043 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.757051 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.757064 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.757074 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:03Z","lastTransitionTime":"2026-02-23T13:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.860410 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.860447 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.860459 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.860472 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.860483 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:03Z","lastTransitionTime":"2026-02-23T13:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.962411 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.962667 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.962729 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.962826 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.962895 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:03Z","lastTransitionTime":"2026-02-23T13:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:03 crc kubenswrapper[4851]: I0223 13:09:03.973409 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:10:14.499425225 +0000 UTC Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.064983 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.065227 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.065308 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.065423 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.065550 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:04Z","lastTransitionTime":"2026-02-23T13:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.168292 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.168353 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.168366 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.168382 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.168393 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:04Z","lastTransitionTime":"2026-02-23T13:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.270024 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.270058 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.270068 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.270084 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.270097 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:04Z","lastTransitionTime":"2026-02-23T13:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.372313 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.372403 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.372412 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.372427 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.372440 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:04Z","lastTransitionTime":"2026-02-23T13:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.474843 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.474878 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.474886 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.474901 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.474910 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:04Z","lastTransitionTime":"2026-02-23T13:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.577017 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.577046 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.577054 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.577065 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.577073 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:04Z","lastTransitionTime":"2026-02-23T13:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.679164 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.679238 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.679248 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.679265 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.679279 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:04Z","lastTransitionTime":"2026-02-23T13:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.781853 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.781907 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.781919 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.781937 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.781955 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:04Z","lastTransitionTime":"2026-02-23T13:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.883683 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.883756 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.883767 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.883789 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.883801 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:04Z","lastTransitionTime":"2026-02-23T13:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.967638 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.967754 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:04 crc kubenswrapper[4851]: E0223 13:09:04.967814 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.967880 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:04 crc kubenswrapper[4851]: E0223 13:09:04.967915 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:04 crc kubenswrapper[4851]: E0223 13:09:04.968032 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.973611 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:16:13.832142386 +0000 UTC Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.986072 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.986115 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.986124 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.986140 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:04 crc kubenswrapper[4851]: I0223 13:09:04.986150 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:04Z","lastTransitionTime":"2026-02-23T13:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.088854 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.088891 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.088899 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.088912 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.088922 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:05Z","lastTransitionTime":"2026-02-23T13:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.191933 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.191972 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.191979 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.191993 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.192002 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:05Z","lastTransitionTime":"2026-02-23T13:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.294149 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.294196 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.294207 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.294220 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.294229 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:05Z","lastTransitionTime":"2026-02-23T13:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.396225 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.396289 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.396305 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.396355 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.396371 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:05Z","lastTransitionTime":"2026-02-23T13:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.498298 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.498350 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.498367 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.498384 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.498396 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:05Z","lastTransitionTime":"2026-02-23T13:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.601434 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.601478 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.601490 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.601509 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.601521 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:05Z","lastTransitionTime":"2026-02-23T13:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.704271 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.704313 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.704322 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.704555 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.704576 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:05Z","lastTransitionTime":"2026-02-23T13:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.807532 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.807596 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.807610 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.807631 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.807647 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:05Z","lastTransitionTime":"2026-02-23T13:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.909862 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.909903 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.909913 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.909927 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.909936 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:05Z","lastTransitionTime":"2026-02-23T13:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.973707 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:33:13.138632548 +0000 UTC Feb 23 13:09:05 crc kubenswrapper[4851]: I0223 13:09:05.986753 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"
name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.006534 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.011753 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.011774 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.011781 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 
13:09:06.011793 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.011805 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.021288 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.032480 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.043680 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.056686 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.082938 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.094497 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.113708 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.113735 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.113745 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.113759 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.113768 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.216375 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.216437 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.216448 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.216463 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.216472 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.318459 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.318498 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.318507 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.318523 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.318534 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.422610 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.422704 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.422712 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.422726 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.422736 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.525121 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.525187 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.525200 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.525213 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.525224 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.536638 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.536678 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.536689 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.536703 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.536715 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: E0223 13:09:06.548952 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z"
Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.552842 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.552902 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.552930 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.552952 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.552971 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 13:09:06 crc kubenswrapper[4851]: E0223 13:09:06.571109 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z"
Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.575266 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.575300 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.575310 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.575343 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.575353 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 13:09:06 crc kubenswrapper[4851]: E0223 13:09:06.587585 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.591617 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.591685 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.591697 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.591716 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.591756 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: E0223 13:09:06.607490 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.611813 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.611860 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.611876 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.611899 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.611948 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: E0223 13:09:06.624555 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:06 crc kubenswrapper[4851]: E0223 13:09:06.624784 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.627872 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.627915 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.627928 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.627947 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.627960 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.730304 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.730386 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.730398 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.730417 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.730431 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.833901 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.833954 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.833964 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.833981 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.833993 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.937422 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.937475 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.937491 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.937510 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.937524 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:06Z","lastTransitionTime":"2026-02-23T13:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.967928 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.968033 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.968055 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:06 crc kubenswrapper[4851]: E0223 13:09:06.968161 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:06 crc kubenswrapper[4851]: E0223 13:09:06.968272 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:06 crc kubenswrapper[4851]: E0223 13:09:06.968426 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:06 crc kubenswrapper[4851]: I0223 13:09:06.974461 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:25:02.240561234 +0000 UTC Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.040479 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.040521 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.040532 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.040546 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.040557 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:07Z","lastTransitionTime":"2026-02-23T13:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.143961 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.144048 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.144065 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.144121 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.144138 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:07Z","lastTransitionTime":"2026-02-23T13:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.246268 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.246374 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.246384 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.246398 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.246407 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:07Z","lastTransitionTime":"2026-02-23T13:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.348373 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.348440 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.348461 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.348488 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.348508 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:07Z","lastTransitionTime":"2026-02-23T13:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.451126 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.451188 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.451200 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.451215 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.451228 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:07Z","lastTransitionTime":"2026-02-23T13:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.553808 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.553863 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.553875 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.553890 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.553902 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:07Z","lastTransitionTime":"2026-02-23T13:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.656555 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.656601 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.656613 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.656631 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.656643 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:07Z","lastTransitionTime":"2026-02-23T13:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.758558 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.758609 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.758621 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.758638 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.758651 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:07Z","lastTransitionTime":"2026-02-23T13:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.861939 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.862005 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.862021 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.862044 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.862062 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:07Z","lastTransitionTime":"2026-02-23T13:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.964430 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.964499 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.964525 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.964592 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.964614 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:07Z","lastTransitionTime":"2026-02-23T13:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:07 crc kubenswrapper[4851]: I0223 13:09:07.975499 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 23:56:48.400341753 +0000 UTC Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.067059 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.067087 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.067096 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.067107 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.067116 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:08Z","lastTransitionTime":"2026-02-23T13:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.170060 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.170102 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.170112 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.170129 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.170140 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:08Z","lastTransitionTime":"2026-02-23T13:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.272451 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.272486 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.272494 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.272509 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.272518 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:08Z","lastTransitionTime":"2026-02-23T13:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.374174 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.374208 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.374217 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.374229 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.374237 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:08Z","lastTransitionTime":"2026-02-23T13:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.477647 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.477687 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.477696 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.477710 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.477719 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:08Z","lastTransitionTime":"2026-02-23T13:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.579836 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.579875 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.579883 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.579895 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.579908 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:08Z","lastTransitionTime":"2026-02-23T13:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.683156 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.683200 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.683210 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.683225 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.683239 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:08Z","lastTransitionTime":"2026-02-23T13:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.785282 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.785392 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.785406 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.785425 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.785437 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:08Z","lastTransitionTime":"2026-02-23T13:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.888236 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.888344 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.888356 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.888374 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.888382 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:08Z","lastTransitionTime":"2026-02-23T13:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.968016 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.968076 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.968147 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:08 crc kubenswrapper[4851]: E0223 13:09:08.968150 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:08 crc kubenswrapper[4851]: E0223 13:09:08.968210 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:08 crc kubenswrapper[4851]: E0223 13:09:08.968281 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.976215 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 08:12:10.727679826 +0000 UTC Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.989912 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.989948 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.989956 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.989969 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:08 crc kubenswrapper[4851]: I0223 13:09:08.989978 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:08Z","lastTransitionTime":"2026-02-23T13:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.092701 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.092740 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.092756 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.092772 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.092784 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:09Z","lastTransitionTime":"2026-02-23T13:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.195015 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.195046 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.195054 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.195067 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.195077 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:09Z","lastTransitionTime":"2026-02-23T13:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.298051 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.298110 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.298126 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.298149 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.298167 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:09Z","lastTransitionTime":"2026-02-23T13:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.401046 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.401086 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.401094 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.401107 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.401116 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:09Z","lastTransitionTime":"2026-02-23T13:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.503504 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.503531 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.503538 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.503551 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.503558 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:09Z","lastTransitionTime":"2026-02-23T13:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.605803 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.605844 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.605854 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.605868 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.605880 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:09Z","lastTransitionTime":"2026-02-23T13:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.708877 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.708921 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.708933 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.708949 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.708960 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:09Z","lastTransitionTime":"2026-02-23T13:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.813774 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.813816 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.813825 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.813868 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.813879 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:09Z","lastTransitionTime":"2026-02-23T13:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.916357 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.916404 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.916412 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.916428 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.916438 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:09Z","lastTransitionTime":"2026-02-23T13:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:09 crc kubenswrapper[4851]: I0223 13:09:09.976712 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:39:02.973343335 +0000 UTC Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.018801 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.018846 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.018857 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.018875 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.018889 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:10Z","lastTransitionTime":"2026-02-23T13:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.120992 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.121036 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.121044 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.121059 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.121069 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:10Z","lastTransitionTime":"2026-02-23T13:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.223000 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.223041 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.223050 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.223066 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.223126 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:10Z","lastTransitionTime":"2026-02-23T13:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.325123 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.325171 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.325184 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.325202 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.325214 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:10Z","lastTransitionTime":"2026-02-23T13:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.427724 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.427766 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.427775 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.427792 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.427802 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:10Z","lastTransitionTime":"2026-02-23T13:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.529883 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.530501 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.530522 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.530537 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.530547 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:10Z","lastTransitionTime":"2026-02-23T13:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.632758 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.632798 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.632807 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.632822 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.632831 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:10Z","lastTransitionTime":"2026-02-23T13:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.735712 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.735754 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.735767 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.735784 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.735914 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:10Z","lastTransitionTime":"2026-02-23T13:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.799910 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.800010 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.800076 4851 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.800091 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:09:42.80005806 +0000 UTC m=+137.481761748 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.800133 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:09:42.800122802 +0000 UTC m=+137.481826500 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.800237 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.800399 4851 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.800452 4851 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:09:42.800442512 +0000 UTC m=+137.482146200 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.837585 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.837625 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.837638 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.837655 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.837673 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:10Z","lastTransitionTime":"2026-02-23T13:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.901479 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.901545 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.901622 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.901624 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.901671 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.901687 4851 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 
13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.901637 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.901738 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 13:09:42.901722499 +0000 UTC m=+137.583426197 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.901765 4851 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.901862 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 13:09:42.901838283 +0000 UTC m=+137.583542011 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.941997 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.942042 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.942055 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.942072 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.942088 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:10Z","lastTransitionTime":"2026-02-23T13:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.968639 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.968659 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.968747 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.968832 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.968639 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:10 crc kubenswrapper[4851]: E0223 13:09:10.968901 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:10 crc kubenswrapper[4851]: I0223 13:09:10.977471 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:09:06.849594314 +0000 UTC Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.044260 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.044301 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.044309 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.044343 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.044356 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:11Z","lastTransitionTime":"2026-02-23T13:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.145961 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.146007 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.146020 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.146038 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.146049 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:11Z","lastTransitionTime":"2026-02-23T13:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.218889 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-snjvm"] Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.219321 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-snjvm" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.221100 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.221404 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.221548 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.231969 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.244144 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.250050 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.250099 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.250114 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.250131 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.250142 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:11Z","lastTransitionTime":"2026-02-23T13:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.259482 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.273853 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.284818 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.300900 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.305176 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/584d5dd7-e0f3-4695-bf51-22e1b643db23-hosts-file\") pod \"node-resolver-snjvm\" (UID: \"584d5dd7-e0f3-4695-bf51-22e1b643db23\") " pod="openshift-dns/node-resolver-snjvm" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.305205 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpxbk\" (UniqueName: \"kubernetes.io/projected/584d5dd7-e0f3-4695-bf51-22e1b643db23-kube-api-access-qpxbk\") pod \"node-resolver-snjvm\" (UID: \"584d5dd7-e0f3-4695-bf51-22e1b643db23\") " pod="openshift-dns/node-resolver-snjvm" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.312020 4851 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.334579 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.344678 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.352618 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.352666 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.352682 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.352698 4851 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.352711 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:11Z","lastTransitionTime":"2026-02-23T13:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.406285 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpxbk\" (UniqueName: \"kubernetes.io/projected/584d5dd7-e0f3-4695-bf51-22e1b643db23-kube-api-access-qpxbk\") pod \"node-resolver-snjvm\" (UID: \"584d5dd7-e0f3-4695-bf51-22e1b643db23\") " pod="openshift-dns/node-resolver-snjvm" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.406377 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/584d5dd7-e0f3-4695-bf51-22e1b643db23-hosts-file\") pod \"node-resolver-snjvm\" (UID: \"584d5dd7-e0f3-4695-bf51-22e1b643db23\") " pod="openshift-dns/node-resolver-snjvm" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.406446 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/584d5dd7-e0f3-4695-bf51-22e1b643db23-hosts-file\") pod \"node-resolver-snjvm\" (UID: \"584d5dd7-e0f3-4695-bf51-22e1b643db23\") " pod="openshift-dns/node-resolver-snjvm" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.427483 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpxbk\" (UniqueName: \"kubernetes.io/projected/584d5dd7-e0f3-4695-bf51-22e1b643db23-kube-api-access-qpxbk\") pod 
\"node-resolver-snjvm\" (UID: \"584d5dd7-e0f3-4695-bf51-22e1b643db23\") " pod="openshift-dns/node-resolver-snjvm" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.455159 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.455357 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.455443 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.455526 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.455622 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:11Z","lastTransitionTime":"2026-02-23T13:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.530568 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-snjvm" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.557949 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.557998 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.558010 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.558027 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.558039 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:11Z","lastTransitionTime":"2026-02-23T13:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.594384 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-t7cvl"] Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.594670 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-npswg"] Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.594870 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8sz99"] Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.595568 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.595570 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.595578 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.604630 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.604667 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.604791 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.604937 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.604995 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.605078 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.605122 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.605216 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 
13:09:11.605301 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.606083 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.606231 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.606419 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.627319 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-
readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.641666 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.655234 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.659912 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.659941 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.659949 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:11 crc 
kubenswrapper[4851]: I0223 13:09:11.659964 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.659973 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:11Z","lastTransitionTime":"2026-02-23T13:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.670350 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.689418 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.704082 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709315 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5a296ee-a904-4283-8849-65abb16717b4-mcd-auth-proxy-config\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709393 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709414 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-os-release\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709428 4851 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-run-netns\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709441 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-etc-kubernetes\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709469 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-daemon-config\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709493 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-cni-dir\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709511 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-cnibin\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709538 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-cni-binary-copy\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709557 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-run-k8s-cni-cncf-io\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709577 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-var-lib-kubelet\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709594 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9svq\" (UniqueName: \"kubernetes.io/projected/c5a296ee-a904-4283-8849-65abb16717b4-kube-api-access-d9svq\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709612 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c5a296ee-a904-4283-8849-65abb16717b4-rootfs\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709632 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-os-release\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709650 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5bd228f9-317e-43a7-a9f8-473d69d93204-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709664 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-var-lib-cni-bin\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709677 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdj9\" (UniqueName: \"kubernetes.io/projected/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-kube-api-access-shdj9\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709691 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-var-lib-cni-multus\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc 
kubenswrapper[4851]: I0223 13:09:11.709704 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-hostroot\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709717 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-run-multus-certs\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709731 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-cnibin\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709745 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-system-cni-dir\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709759 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5a296ee-a904-4283-8849-65abb16717b4-proxy-tls\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc 
kubenswrapper[4851]: I0223 13:09:11.709781 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5bd228f9-317e-43a7-a9f8-473d69d93204-cni-binary-copy\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709797 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-socket-dir-parent\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709812 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-system-cni-dir\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709826 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlfxz\" (UniqueName: \"kubernetes.io/projected/5bd228f9-317e-43a7-a9f8-473d69d93204-kube-api-access-tlfxz\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.709841 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-conf-dir\") pod \"multus-t7cvl\" (UID: 
\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.716627 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.728376 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.740443 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.750215 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.762191 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.762232 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.762270 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.762282 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.762298 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.762310 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:11Z","lastTransitionTime":"2026-02-23T13:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.775453 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.787571 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.802576 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.810949 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-system-cni-dir\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.810992 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlfxz\" (UniqueName: \"kubernetes.io/projected/5bd228f9-317e-43a7-a9f8-473d69d93204-kube-api-access-tlfxz\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811016 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-conf-dir\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811037 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5a296ee-a904-4283-8849-65abb16717b4-mcd-auth-proxy-config\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811058 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811080 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-os-release\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811100 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-run-netns\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811121 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-etc-kubernetes\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811157 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-daemon-config\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811180 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-cni-dir\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811201 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-cnibin\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811229 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-cni-binary-copy\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811251 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-run-k8s-cni-cncf-io\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811273 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-var-lib-kubelet\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " 
pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811293 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9svq\" (UniqueName: \"kubernetes.io/projected/c5a296ee-a904-4283-8849-65abb16717b4-kube-api-access-d9svq\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811314 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c5a296ee-a904-4283-8849-65abb16717b4-rootfs\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811356 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-os-release\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811406 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5bd228f9-317e-43a7-a9f8-473d69d93204-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.811427 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-var-lib-cni-bin\") pod \"multus-t7cvl\" (UID: 
\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812031 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shdj9\" (UniqueName: \"kubernetes.io/projected/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-kube-api-access-shdj9\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812217 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-var-lib-cni-multus\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812249 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-hostroot\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812276 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-run-multus-certs\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812303 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-system-cni-dir\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812339 
4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-cnibin\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812360 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5a296ee-a904-4283-8849-65abb16717b4-proxy-tls\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812397 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5bd228f9-317e-43a7-a9f8-473d69d93204-cni-binary-copy\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812412 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-socket-dir-parent\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812555 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-socket-dir-parent\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812597 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-system-cni-dir\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.812882 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-conf-dir\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.813735 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5a296ee-a904-4283-8849-65abb16717b4-mcd-auth-proxy-config\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.813817 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c5a296ee-a904-4283-8849-65abb16717b4-rootfs\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.814107 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-os-release\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.814168 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-run-netns\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.814198 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-etc-kubernetes\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.814345 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.814423 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-var-lib-cni-multus\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.814491 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-os-release\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.814532 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-system-cni-dir\") pod \"multus-t7cvl\" (UID: 
\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.814599 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-hostroot\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.814648 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-run-multus-certs\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.814916 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-daemon-config\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.814989 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-run-k8s-cni-cncf-io\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.815021 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-var-lib-kubelet\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.815046 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-cni-binary-copy\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.815115 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5bd228f9-317e-43a7-a9f8-473d69d93204-cnibin\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.815126 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-multus-cni-dir\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.815139 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5bd228f9-317e-43a7-a9f8-473d69d93204-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.815174 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-cnibin\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.815279 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-host-var-lib-cni-bin\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.815711 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5bd228f9-317e-43a7-a9f8-473d69d93204-cni-binary-copy\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.818231 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.825009 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c5a296ee-a904-4283-8849-65abb16717b4-proxy-tls\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.829320 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlfxz\" (UniqueName: \"kubernetes.io/projected/5bd228f9-317e-43a7-a9f8-473d69d93204-kube-api-access-tlfxz\") pod \"multus-additional-cni-plugins-8sz99\" (UID: \"5bd228f9-317e-43a7-a9f8-473d69d93204\") " pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.831682 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9svq\" (UniqueName: \"kubernetes.io/projected/c5a296ee-a904-4283-8849-65abb16717b4-kube-api-access-d9svq\") pod \"machine-config-daemon-npswg\" (UID: \"c5a296ee-a904-4283-8849-65abb16717b4\") " pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc 
kubenswrapper[4851]: I0223 13:09:11.832589 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shdj9\" (UniqueName: \"kubernetes.io/projected/d14644c4-9d6f-4a06-bc4a-85795d4be4cd-kube-api-access-shdj9\") pod \"multus-t7cvl\" (UID: \"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\") " pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.832910 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.845271 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.853709 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.864689 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.864714 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.864722 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.864736 4851 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.864746 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:11Z","lastTransitionTime":"2026-02-23T13:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.868393 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.885592 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.897837 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.911347 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.915937 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8sz99" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.923864 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.923941 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.930281 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-t7cvl" Feb 23 13:09:11 crc kubenswrapper[4851]: W0223 13:09:11.935339 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5a296ee_a904_4283_8849_65abb16717b4.slice/crio-078842ae6f06e30bb143e7cfcda9637a61f9ac76522b496a22a717c0de1d8de1 WatchSource:0}: Error finding container 078842ae6f06e30bb143e7cfcda9637a61f9ac76522b496a22a717c0de1d8de1: Status 404 returned error can't find the container with id 078842ae6f06e30bb143e7cfcda9637a61f9ac76522b496a22a717c0de1d8de1 Feb 23 13:09:11 crc kubenswrapper[4851]: W0223 13:09:11.952580 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd14644c4_9d6f_4a06_bc4a_85795d4be4cd.slice/crio-d4b923d318d8d61611d1705a157d4e1071ea79a124806482a9f21a86c75a00ff WatchSource:0}: Error finding container d4b923d318d8d61611d1705a157d4e1071ea79a124806482a9f21a86c75a00ff: Status 404 returned error can't find the container with id d4b923d318d8d61611d1705a157d4e1071ea79a124806482a9f21a86c75a00ff Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.968660 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.968710 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.968722 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:11 crc 
kubenswrapper[4851]: I0223 13:09:11.968738 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.968749 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:11Z","lastTransitionTime":"2026-02-23T13:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.975595 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n9df6"] Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.979191 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:40:15.326476365 +0000 UTC Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.979445 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.981121 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.981111 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.981766 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.982345 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.982900 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.983105 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 13:09:11 crc kubenswrapper[4851]: I0223 13:09:11.983256 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.001488 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:11Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.012261 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.022765 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.032409 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.043587 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.052983 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.065976 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.072035 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.072068 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.072076 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.072090 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.072101 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:12Z","lastTransitionTime":"2026-02-23T13:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.079773 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.093620 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.105182 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.114762 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-etc-openvswitch\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.114817 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-node-log\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.114835 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzppz\" (UniqueName: \"kubernetes.io/projected/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-kube-api-access-wzppz\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.114855 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-ovn-kubernetes\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.114872 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovn-node-metrics-cert\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.114980 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-netns\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.114996 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-systemd\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: 
I0223 13:09:12.115013 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-var-lib-openvswitch\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115026 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-ovn\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115051 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-log-socket\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115066 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-netd\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115083 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-config\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc 
kubenswrapper[4851]: I0223 13:09:12.115169 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115215 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-kubelet\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115242 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-script-lib\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115316 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-bin\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115366 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-env-overrides\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115397 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-slash\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115412 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-openvswitch\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115428 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-systemd-units\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.115479 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.135476 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.148181 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.174017 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.174050 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.174059 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.174075 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.174085 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:12Z","lastTransitionTime":"2026-02-23T13:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.217667 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-systemd-units\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.217724 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-etc-openvswitch\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.217759 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-node-log\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.217789 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzppz\" (UniqueName: \"kubernetes.io/projected/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-kube-api-access-wzppz\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.217820 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-ovn-kubernetes\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc 
kubenswrapper[4851]: I0223 13:09:12.217839 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovn-node-metrics-cert\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.217866 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-netns\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.217891 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-systemd\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.217937 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-var-lib-openvswitch\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.217963 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-ovn\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.217999 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-log-socket\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218020 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-netd\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218040 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-config\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218065 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218101 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-kubelet\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218129 4851 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-script-lib\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218176 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-bin\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218204 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-env-overrides\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218241 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-slash\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218264 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-openvswitch\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218357 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-openvswitch\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218407 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-systemd-units\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218440 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-etc-openvswitch\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218470 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-node-log\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.218878 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-ovn-kubernetes\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.219229 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-netns\") pod \"ovnkube-node-n9df6\" 
(UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.219347 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-systemd\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.219507 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-var-lib-openvswitch\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.219753 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.219836 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-bin\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.219770 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-kubelet\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.219974 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-netd\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.220030 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-log-socket\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.219986 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-slash\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.220069 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-ovn\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.220469 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-env-overrides\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.220771 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-script-lib\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.220940 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-config\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.224204 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovn-node-metrics-cert\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.234471 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzppz\" (UniqueName: \"kubernetes.io/projected/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-kube-api-access-wzppz\") pod \"ovnkube-node-n9df6\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.276827 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.276868 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.276880 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:12 crc kubenswrapper[4851]: 
I0223 13:09:12.276896 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.276907 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:12Z","lastTransitionTime":"2026-02-23T13:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.356253 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.356295 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"a39796e37d7c46747743408ee115fc38d96faf5b9f64d05a5b6e261756d05626"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.356304 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"078842ae6f06e30bb143e7cfcda9637a61f9ac76522b496a22a717c0de1d8de1"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.357958 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-snjvm" event={"ID":"584d5dd7-e0f3-4695-bf51-22e1b643db23","Type":"ContainerStarted","Data":"f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 
13:09:12.357994 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-snjvm" event={"ID":"584d5dd7-e0f3-4695-bf51-22e1b643db23","Type":"ContainerStarted","Data":"b3b7cfc81c859e2fb3c7f4ab819a1a27c86ea0e38d2d87441a25c54f7d501917"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.359778 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7cvl" event={"ID":"d14644c4-9d6f-4a06-bc4a-85795d4be4cd","Type":"ContainerStarted","Data":"61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.359803 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7cvl" event={"ID":"d14644c4-9d6f-4a06-bc4a-85795d4be4cd","Type":"ContainerStarted","Data":"d4b923d318d8d61611d1705a157d4e1071ea79a124806482a9f21a86c75a00ff"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.361291 4851 generic.go:334] "Generic (PLEG): container finished" podID="5bd228f9-317e-43a7-a9f8-473d69d93204" containerID="f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82" exitCode=0 Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.361318 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" event={"ID":"5bd228f9-317e-43a7-a9f8-473d69d93204","Type":"ContainerDied","Data":"f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.361344 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" event={"ID":"5bd228f9-317e-43a7-a9f8-473d69d93204","Type":"ContainerStarted","Data":"74f3c6ffe7f7800c2fc112a35af8317304fa60b4a534700f12cb9d05ba287c59"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.368940 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.379590 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.379621 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.379630 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.379679 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.379690 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:12Z","lastTransitionTime":"2026-02-23T13:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.379831 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.391043 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.394075 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.411875 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: W0223 13:09:12.415575 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c1929e0_6878_4572_b6d1_3a6dd8e2c291.slice/crio-becab532ab832248a48accec5d719c69386c3939692a2725a6c04baa51a1306c WatchSource:0}: Error finding container becab532ab832248a48accec5d719c69386c3939692a2725a6c04baa51a1306c: Status 404 returned error can't find the container with id becab532ab832248a48accec5d719c69386c3939692a2725a6c04baa51a1306c Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.427251 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.448851 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
26-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.465888 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.481789 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.481924 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.481950 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.481959 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.481972 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.481982 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:12Z","lastTransitionTime":"2026-02-23T13:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.493658 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.506592 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.523854 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.563053 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.584558 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.588019 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.588051 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.588061 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.588084 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.588109 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:12Z","lastTransitionTime":"2026-02-23T13:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.607180 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.626442 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.643284 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.657362 4851 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.670376 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.682074 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.690110 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 
13:09:12.690147 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.690156 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.690171 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.690180 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:12Z","lastTransitionTime":"2026-02-23T13:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.693505 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.710810 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.723701 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.741317 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.754149 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.767487 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.779513 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:12Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.792402 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.792441 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.792451 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:12 crc 
kubenswrapper[4851]: I0223 13:09:12.792468 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.792479 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:12Z","lastTransitionTime":"2026-02-23T13:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.894705 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.894745 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.894754 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.894768 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.894778 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:12Z","lastTransitionTime":"2026-02-23T13:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.968195 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.968200 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.968370 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:12 crc kubenswrapper[4851]: E0223 13:09:12.968554 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:12 crc kubenswrapper[4851]: E0223 13:09:12.968636 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:12 crc kubenswrapper[4851]: E0223 13:09:12.968704 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.980177 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:24:58.985935684 +0000 UTC Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.997164 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.997209 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.997219 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.997235 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:12 crc kubenswrapper[4851]: I0223 13:09:12.997248 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:12Z","lastTransitionTime":"2026-02-23T13:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.100107 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.100139 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.100147 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.100161 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.100178 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:13Z","lastTransitionTime":"2026-02-23T13:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.203668 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.203724 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.203738 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.203758 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.203773 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:13Z","lastTransitionTime":"2026-02-23T13:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.307561 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.307606 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.307620 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.307638 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.307648 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:13Z","lastTransitionTime":"2026-02-23T13:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.367309 4851 generic.go:334] "Generic (PLEG): container finished" podID="5bd228f9-317e-43a7-a9f8-473d69d93204" containerID="9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48" exitCode=0 Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.367368 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" event={"ID":"5bd228f9-317e-43a7-a9f8-473d69d93204","Type":"ContainerDied","Data":"9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.371350 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1" exitCode=0 Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.371382 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.371420 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"becab532ab832248a48accec5d719c69386c3939692a2725a6c04baa51a1306c"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.383729 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.399244 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.414507 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.418707 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.418749 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.418758 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.418774 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.418783 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:13Z","lastTransitionTime":"2026-02-23T13:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.435628 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.457528 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.476855 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.492426 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.506344 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.521020 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.521059 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.521068 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:13 crc 
kubenswrapper[4851]: I0223 13:09:13.521084 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.521094 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:13Z","lastTransitionTime":"2026-02-23T13:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.531161 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.554353 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.571658 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.592228 4851 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.613729 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.623400 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.623443 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.623460 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.623476 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.623489 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:13Z","lastTransitionTime":"2026-02-23T13:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.632974 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.646722 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.672143 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.693261 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.707201 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.724231 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.726466 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.726502 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.726510 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.726525 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.726534 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:13Z","lastTransitionTime":"2026-02-23T13:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.735966 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.753101 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.768259 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.781460 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.806661 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.820105 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 
13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.828397 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.828431 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.828441 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.828456 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.828468 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:13Z","lastTransitionTime":"2026-02-23T13:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.833701 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:13Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.930957 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.931005 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.931017 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.931034 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.931047 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:13Z","lastTransitionTime":"2026-02-23T13:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:13 crc kubenswrapper[4851]: I0223 13:09:13.981165 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:53:26.093994691 +0000 UTC Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.034166 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.034216 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.034230 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.034252 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.034262 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:14Z","lastTransitionTime":"2026-02-23T13:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.135922 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.135962 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.135974 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.135991 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.136003 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:14Z","lastTransitionTime":"2026-02-23T13:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.238288 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.238362 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.238375 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.238391 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.238401 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:14Z","lastTransitionTime":"2026-02-23T13:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.341308 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.341351 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.341359 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.341371 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.341380 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:14Z","lastTransitionTime":"2026-02-23T13:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.377210 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.377246 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.377256 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.377264 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.377272 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.377280 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983"} Feb 23 13:09:14 crc kubenswrapper[4851]: 
I0223 13:09:14.379309 4851 generic.go:334] "Generic (PLEG): container finished" podID="5bd228f9-317e-43a7-a9f8-473d69d93204" containerID="41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610" exitCode=0 Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.379350 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" event={"ID":"5bd228f9-317e-43a7-a9f8-473d69d93204","Type":"ContainerDied","Data":"41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.399002 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02
-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.411378 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.421965 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.433790 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.445268 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.445300 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.445310 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:14 crc 
kubenswrapper[4851]: I0223 13:09:14.445354 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.445366 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:14Z","lastTransitionTime":"2026-02-23T13:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.447075 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.461424 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.477098 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 
13:09:14.488164 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 
13:09:14.500812 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.511778 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.524508 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.542666 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.547226 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.547260 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.547270 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.547287 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.547300 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:14Z","lastTransitionTime":"2026-02-23T13:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.554652 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:14Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.649362 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.649405 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.649415 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.649431 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.649440 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:14Z","lastTransitionTime":"2026-02-23T13:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.752028 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.752066 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.752076 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.752093 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.752104 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:14Z","lastTransitionTime":"2026-02-23T13:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.861878 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.861972 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.861997 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.862033 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.862054 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:14Z","lastTransitionTime":"2026-02-23T13:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.964846 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.964908 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.964920 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.964940 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.964954 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:14Z","lastTransitionTime":"2026-02-23T13:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.968277 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.968379 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.968319 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:14 crc kubenswrapper[4851]: E0223 13:09:14.968460 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:14 crc kubenswrapper[4851]: E0223 13:09:14.968693 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:14 crc kubenswrapper[4851]: E0223 13:09:14.968776 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:14 crc kubenswrapper[4851]: I0223 13:09:14.981486 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:30:04.636047594 +0000 UTC Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.069448 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.069496 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.069505 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.069520 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.069532 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:15Z","lastTransitionTime":"2026-02-23T13:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.172600 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.172671 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.172689 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.172718 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.172737 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:15Z","lastTransitionTime":"2026-02-23T13:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.275438 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.275486 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.275495 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.275511 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.275522 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:15Z","lastTransitionTime":"2026-02-23T13:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.382243 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.382281 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.382293 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.382314 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.382324 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:15Z","lastTransitionTime":"2026-02-23T13:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.389368 4851 generic.go:334] "Generic (PLEG): container finished" podID="5bd228f9-317e-43a7-a9f8-473d69d93204" containerID="7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa" exitCode=0 Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.389445 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" event={"ID":"5bd228f9-317e-43a7-a9f8-473d69d93204","Type":"ContainerDied","Data":"7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa"} Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.433466 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.450642 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.464807 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.480012 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.485757 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.485801 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.485815 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:15 crc 
kubenswrapper[4851]: I0223 13:09:15.485842 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.485861 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:15Z","lastTransitionTime":"2026-02-23T13:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.496727 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.511244 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.526694 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.540516 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.554048 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\
\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.567618 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.581150 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.591082 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.591123 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.591135 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.591156 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.591172 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:15Z","lastTransitionTime":"2026-02-23T13:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.602937 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.619959 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.694480 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.694559 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.694580 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.694611 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.694634 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:15Z","lastTransitionTime":"2026-02-23T13:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.797864 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.797929 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.797948 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.797979 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.798001 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:15Z","lastTransitionTime":"2026-02-23T13:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.901182 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.901255 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.901275 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.901304 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.901362 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:15Z","lastTransitionTime":"2026-02-23T13:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.982566 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:42:23.606533802 +0000 UTC Feb 23 13:09:15 crc kubenswrapper[4851]: I0223 13:09:15.989792 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:15Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.004736 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.004880 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.004914 4851 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.004951 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.004978 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.015878 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.031901 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.054309 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.074924 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.104581 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.108573 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.108596 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.108608 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.108629 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.108642 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.122808 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.141187 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.151632 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.167766 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.183752 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.203360 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.210874 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.210938 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.210950 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.210973 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.210986 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.215029 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z 
is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.314144 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.314211 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.314231 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.314259 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.314277 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.396536 4851 generic.go:334] "Generic (PLEG): container finished" podID="5bd228f9-317e-43a7-a9f8-473d69d93204" containerID="b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011" exitCode=0 Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.396590 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" event={"ID":"5bd228f9-317e-43a7-a9f8-473d69d93204","Type":"ContainerDied","Data":"b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011"} Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.414690 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.417956 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.417992 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.418004 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.418022 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.418034 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.444948 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.463854 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] 
check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.480511 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.498606 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.515259 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.520369 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.520404 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.520415 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.520431 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.520445 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.534635 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.549132 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b
66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.564529 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.579460 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.594662 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.608443 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.623255 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.623288 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.623296 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.623311 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.623322 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.632115 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.725182 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.725224 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.725234 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.725250 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.725259 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.831416 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.831480 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.831513 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.831535 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.831555 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.850166 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.850207 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.850216 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.850232 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.850243 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: E0223 13:09:16.866307 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.872909 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.872973 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.872995 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.873013 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.873024 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: E0223 13:09:16.885115 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.890237 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.890540 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.890852 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.891123 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.891378 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: E0223 13:09:16.905075 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.909150 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.909209 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.909219 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.909239 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.909256 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: E0223 13:09:16.923430 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.928157 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.928190 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.928198 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.928212 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.928220 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: E0223 13:09:16.940882 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:16Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:16 crc kubenswrapper[4851]: E0223 13:09:16.941013 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.942561 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.942581 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.942588 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.942601 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.942609 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:16Z","lastTransitionTime":"2026-02-23T13:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.967835 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:16 crc kubenswrapper[4851]: E0223 13:09:16.967975 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.968095 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.968153 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:16 crc kubenswrapper[4851]: E0223 13:09:16.968182 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:16 crc kubenswrapper[4851]: E0223 13:09:16.968407 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:16 crc kubenswrapper[4851]: I0223 13:09:16.982965 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:28:03.663464932 +0000 UTC Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.046185 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.046245 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.046258 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.046282 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.046295 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:17Z","lastTransitionTime":"2026-02-23T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.152971 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.153051 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.153073 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.153183 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.153213 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:17Z","lastTransitionTime":"2026-02-23T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.255171 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.255200 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.255207 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.255219 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.255228 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:17Z","lastTransitionTime":"2026-02-23T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.358056 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.358103 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.358116 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.358133 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.358569 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:17Z","lastTransitionTime":"2026-02-23T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.401035 4851 generic.go:334] "Generic (PLEG): container finished" podID="5bd228f9-317e-43a7-a9f8-473d69d93204" containerID="c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1" exitCode=0 Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.401094 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" event={"ID":"5bd228f9-317e-43a7-a9f8-473d69d93204","Type":"ContainerDied","Data":"c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.407985 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.426229 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.442495 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.454516 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.461610 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.461646 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.461658 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.461706 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.461718 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:17Z","lastTransitionTime":"2026-02-23T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.464738 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.475430 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.486778 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.499404 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.513498 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.527016 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.538029 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.547756 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.563743 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.563969 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.564028 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.564096 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.564150 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:17Z","lastTransitionTime":"2026-02-23T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.565781 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.579008 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.667116 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.667164 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.667176 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.667195 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.667208 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:17Z","lastTransitionTime":"2026-02-23T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.769238 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.769553 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.769582 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.769599 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.769610 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:17Z","lastTransitionTime":"2026-02-23T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.873724 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.873757 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.873768 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.873788 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.873797 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:17Z","lastTransitionTime":"2026-02-23T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.887286 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-chvdk"] Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.887933 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-chvdk" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.889777 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.890273 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.890590 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.891048 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.912276 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}}
,{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.928267 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.941688 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.953123 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.964274 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.975514 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.976481 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.976522 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.976531 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.976544 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.976554 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:17Z","lastTransitionTime":"2026-02-23T13:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.983661 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:09:16.845227794 +0000 UTC Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.988135 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5e47bced-c5be-4d4f-9dee-0a992534dea2-serviceca\") pod \"node-ca-chvdk\" (UID: \"5e47bced-c5be-4d4f-9dee-0a992534dea2\") " pod="openshift-image-registry/node-ca-chvdk" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.988307 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h64b\" (UniqueName: \"kubernetes.io/projected/5e47bced-c5be-4d4f-9dee-0a992534dea2-kube-api-access-6h64b\") pod \"node-ca-chvdk\" (UID: \"5e47bced-c5be-4d4f-9dee-0a992534dea2\") " pod="openshift-image-registry/node-ca-chvdk" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.988429 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e47bced-c5be-4d4f-9dee-0a992534dea2-host\") pod \"node-ca-chvdk\" (UID: \"5e47bced-c5be-4d4f-9dee-0a992534dea2\") " pod="openshift-image-registry/node-ca-chvdk" Feb 23 13:09:17 crc kubenswrapper[4851]: I0223 13:09:17.990967 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:17Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.006505 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.020925 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.037411 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.050433 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.069765 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.078951 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.079021 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.079040 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.079064 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.079086 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:18Z","lastTransitionTime":"2026-02-23T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.083202 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.089790 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5e47bced-c5be-4d4f-9dee-0a992534dea2-serviceca\") pod \"node-ca-chvdk\" (UID: \"5e47bced-c5be-4d4f-9dee-0a992534dea2\") " pod="openshift-image-registry/node-ca-chvdk" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.089860 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h64b\" (UniqueName: \"kubernetes.io/projected/5e47bced-c5be-4d4f-9dee-0a992534dea2-kube-api-access-6h64b\") pod \"node-ca-chvdk\" (UID: \"5e47bced-c5be-4d4f-9dee-0a992534dea2\") " pod="openshift-image-registry/node-ca-chvdk" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.089884 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e47bced-c5be-4d4f-9dee-0a992534dea2-host\") pod \"node-ca-chvdk\" (UID: \"5e47bced-c5be-4d4f-9dee-0a992534dea2\") " pod="openshift-image-registry/node-ca-chvdk" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.089946 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e47bced-c5be-4d4f-9dee-0a992534dea2-host\") pod \"node-ca-chvdk\" (UID: \"5e47bced-c5be-4d4f-9dee-0a992534dea2\") " pod="openshift-image-registry/node-ca-chvdk" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.091208 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/5e47bced-c5be-4d4f-9dee-0a992534dea2-serviceca\") pod \"node-ca-chvdk\" (UID: \"5e47bced-c5be-4d4f-9dee-0a992534dea2\") " pod="openshift-image-registry/node-ca-chvdk" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.093905 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.109182 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h64b\" (UniqueName: \"kubernetes.io/projected/5e47bced-c5be-4d4f-9dee-0a992534dea2-kube-api-access-6h64b\") pod \"node-ca-chvdk\" (UID: \"5e47bced-c5be-4d4f-9dee-0a992534dea2\") " pod="openshift-image-registry/node-ca-chvdk" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.182584 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.182628 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.182642 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.182660 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.182672 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:18Z","lastTransitionTime":"2026-02-23T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.204573 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-chvdk" Feb 23 13:09:18 crc kubenswrapper[4851]: W0223 13:09:18.215892 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e47bced_c5be_4d4f_9dee_0a992534dea2.slice/crio-0847b85085a3954e9dde9801bb89b2aab0078916bb321f047deff99a07074edd WatchSource:0}: Error finding container 0847b85085a3954e9dde9801bb89b2aab0078916bb321f047deff99a07074edd: Status 404 returned error can't find the container with id 0847b85085a3954e9dde9801bb89b2aab0078916bb321f047deff99a07074edd Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.289398 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.289447 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.289459 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.289475 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.289485 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:18Z","lastTransitionTime":"2026-02-23T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.391493 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.391737 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.391748 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.391763 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.391797 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:18Z","lastTransitionTime":"2026-02-23T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.416246 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-chvdk" event={"ID":"5e47bced-c5be-4d4f-9dee-0a992534dea2","Type":"ContainerStarted","Data":"b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.416354 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-chvdk" event={"ID":"5e47bced-c5be-4d4f-9dee-0a992534dea2","Type":"ContainerStarted","Data":"0847b85085a3954e9dde9801bb89b2aab0078916bb321f047deff99a07074edd"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.421971 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" event={"ID":"5bd228f9-317e-43a7-a9f8-473d69d93204","Type":"ContainerStarted","Data":"3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.433799 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.466282 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.492443 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.493670 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.493714 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.493728 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.493744 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.493756 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:18Z","lastTransitionTime":"2026-02-23T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.519133 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.532357 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.544725 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.563445 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13
:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.577663 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.594315 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.596065 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.596131 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.596148 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:18 crc 
kubenswrapper[4851]: I0223 13:09:18.596176 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.596203 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:18Z","lastTransitionTime":"2026-02-23T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.612500 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.629515 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.644879 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.662424 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.675792 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.687966 4851 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.698315 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.699071 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.699101 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.699111 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.699124 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.699134 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:18Z","lastTransitionTime":"2026-02-23T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.711281 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.724631 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.736821 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.749182 4851 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.759433 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.775758 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.786734 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.796001 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.801165 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.801199 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.801210 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.801226 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.801239 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:18Z","lastTransitionTime":"2026-02-23T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.815511 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.829530 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints 
version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.842198 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.852219 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:18Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.903153 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.903199 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.903210 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:18 crc 
kubenswrapper[4851]: I0223 13:09:18.903229 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.903243 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:18Z","lastTransitionTime":"2026-02-23T13:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.968710 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:18 crc kubenswrapper[4851]: E0223 13:09:18.968849 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.968727 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:18 crc kubenswrapper[4851]: E0223 13:09:18.968957 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.968706 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:18 crc kubenswrapper[4851]: E0223 13:09:18.969019 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:18 crc kubenswrapper[4851]: I0223 13:09:18.983989 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:51:00.688575997 +0000 UTC Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.005762 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.005799 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.005806 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.005821 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.005833 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:19Z","lastTransitionTime":"2026-02-23T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.107505 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.107548 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.107557 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.107571 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.107582 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:19Z","lastTransitionTime":"2026-02-23T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.209614 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.209665 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.209677 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.209698 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.209711 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:19Z","lastTransitionTime":"2026-02-23T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.312151 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.312193 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.312201 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.312216 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.312226 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:19Z","lastTransitionTime":"2026-02-23T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.414526 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.414585 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.414597 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.414618 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.414632 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:19Z","lastTransitionTime":"2026-02-23T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.431683 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389"} Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.432142 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.456562 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-2
3T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.462547 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.478274 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.495743 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.511532 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.517261 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.517306 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.517322 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:19 crc 
kubenswrapper[4851]: I0223 13:09:19.517362 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.517377 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:19Z","lastTransitionTime":"2026-02-23T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.527823 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.541134 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.555373 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.571662 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.572426 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd8
79c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubern
etes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.588472 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.601474 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.612559 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.620852 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.620886 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.620898 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.620915 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.620927 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:19Z","lastTransitionTime":"2026-02-23T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.645689 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.663059 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.677417 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.692560 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.705944 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.721854 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.724729 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.724750 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.724758 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:19 crc 
kubenswrapper[4851]: I0223 13:09:19.724772 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.724781 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:19Z","lastTransitionTime":"2026-02-23T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.747192 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.760901 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f
746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.774606 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.789576 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.802342 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.818600 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410
cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.827068 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.827100 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.827109 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.827124 4851 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.827217 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:19Z","lastTransitionTime":"2026-02-23T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.831870 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.843299 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.864221 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.878955 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-2
3T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.889118 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:19Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.929591 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.929617 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.929626 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.929640 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.929650 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:19Z","lastTransitionTime":"2026-02-23T13:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:19 crc kubenswrapper[4851]: I0223 13:09:19.984244 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:47:08.978251821 +0000 UTC Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.031201 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.031232 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.031241 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.031253 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.031264 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:20Z","lastTransitionTime":"2026-02-23T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.133697 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.133749 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.133763 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.133783 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.133795 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:20Z","lastTransitionTime":"2026-02-23T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.236280 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.236354 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.236367 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.236383 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.236392 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:20Z","lastTransitionTime":"2026-02-23T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.338937 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.338990 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.339001 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.339019 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.339031 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:20Z","lastTransitionTime":"2026-02-23T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.434395 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.434443 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.442132 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.442195 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.442208 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.442225 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.442241 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:20Z","lastTransitionTime":"2026-02-23T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.466181 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.483469 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.497765 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.511885 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.529636 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.542361 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-2
3T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.544541 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.544574 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.544582 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.544596 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.544609 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:20Z","lastTransitionTime":"2026-02-23T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.552522 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.568289 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apise
rver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap 
based request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\
\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.583864 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.597580 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.621507 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.641361 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab
77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.646840 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.646885 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.646896 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.646915 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.646927 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:20Z","lastTransitionTime":"2026-02-23T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.660963 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:
09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.678255 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.698304 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:20Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.749779 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.749834 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.749850 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.749868 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.749882 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:20Z","lastTransitionTime":"2026-02-23T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.852790 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.853112 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.853225 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.853371 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.853497 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:20Z","lastTransitionTime":"2026-02-23T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.957095 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.957527 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.957692 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.957828 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.958001 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:20Z","lastTransitionTime":"2026-02-23T13:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.968520 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.968606 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.968533 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:20 crc kubenswrapper[4851]: E0223 13:09:20.968657 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:20 crc kubenswrapper[4851]: E0223 13:09:20.968843 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:20 crc kubenswrapper[4851]: E0223 13:09:20.968920 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:20 crc kubenswrapper[4851]: I0223 13:09:20.985116 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:08:38.561389825 +0000 UTC Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.061058 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.061376 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.061487 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.061587 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.061671 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:21Z","lastTransitionTime":"2026-02-23T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.164879 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.164984 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.165003 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.165097 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.165130 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:21Z","lastTransitionTime":"2026-02-23T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.269646 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.269722 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.269749 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.269786 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.269811 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:21Z","lastTransitionTime":"2026-02-23T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.374234 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.374280 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.374289 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.374305 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.374318 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:21Z","lastTransitionTime":"2026-02-23T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.476949 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.477514 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.477578 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.477671 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.478031 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:21Z","lastTransitionTime":"2026-02-23T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.582691 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.583484 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.583515 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.583554 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.583587 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:21Z","lastTransitionTime":"2026-02-23T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.685683 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.685725 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.685738 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.685757 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.685772 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:21Z","lastTransitionTime":"2026-02-23T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.788085 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.788130 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.788139 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.788154 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.788164 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:21Z","lastTransitionTime":"2026-02-23T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.890445 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.890498 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.890517 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.890534 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.890592 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:21Z","lastTransitionTime":"2026-02-23T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.985867 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:40:49.55486365 +0000 UTC Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.992686 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.992782 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.992794 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.992807 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:21 crc kubenswrapper[4851]: I0223 13:09:21.992818 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:21Z","lastTransitionTime":"2026-02-23T13:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.094635 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.094672 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.094681 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.094692 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.094701 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:22Z","lastTransitionTime":"2026-02-23T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.197058 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.197094 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.197102 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.197114 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.197122 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:22Z","lastTransitionTime":"2026-02-23T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.299184 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.299224 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.299235 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.299249 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.299261 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:22Z","lastTransitionTime":"2026-02-23T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.401955 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.401982 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.401991 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.402003 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.402013 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:22Z","lastTransitionTime":"2026-02-23T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.441480 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/0.log" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.443512 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389" exitCode=1 Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.443556 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389"} Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.444165 4851 scope.go:117] "RemoveContainer" containerID="f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.465453 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.485278 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.504780 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.504813 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.504823 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.504836 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.504845 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:22Z","lastTransitionTime":"2026-02-23T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.505724 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.521424 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.538522 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.555389 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.573090 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.588930 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.603764 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.607518 4851 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.607559 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.607586 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.607604 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.607613 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:22Z","lastTransitionTime":"2026-02-23T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.618074 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.627868 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.645270 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:21Z\\\",\\\"message\\\":\\\" 6561 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 13:09:21.817316 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 13:09:21.817345 6561 handler.go:190] Sending *v1.Node event handler 7 
for removal\\\\nI0223 13:09:21.817371 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 13:09:21.817393 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 13:09:21.817398 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 13:09:21.817440 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 13:09:21.817458 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 13:09:21.817477 6561 factory.go:656] Stopping watch factory\\\\nI0223 13:09:21.817493 6561 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:21.817514 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 13:09:21.817524 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 13:09:21.817530 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 13:09:21.817535 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 13:09:21.817541 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 13:09:21.817546 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 
13:09:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13
b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.658600 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.672983 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:22Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.710423 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.710464 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.710476 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.710491 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.710500 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:22Z","lastTransitionTime":"2026-02-23T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.812793 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.812839 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.812850 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.812869 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.812882 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:22Z","lastTransitionTime":"2026-02-23T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.914917 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.914974 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.914985 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.915005 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.915020 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:22Z","lastTransitionTime":"2026-02-23T13:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.968444 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.968564 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.968656 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:22 crc kubenswrapper[4851]: E0223 13:09:22.968583 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:22 crc kubenswrapper[4851]: E0223 13:09:22.968789 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:22 crc kubenswrapper[4851]: E0223 13:09:22.968947 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:22 crc kubenswrapper[4851]: I0223 13:09:22.986918 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 01:12:37.702282028 +0000 UTC Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.018091 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.018136 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.018148 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.018169 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.018183 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:23Z","lastTransitionTime":"2026-02-23T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.120293 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.120345 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.120356 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.120371 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.120380 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:23Z","lastTransitionTime":"2026-02-23T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.223346 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.223384 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.223392 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.223406 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.223415 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:23Z","lastTransitionTime":"2026-02-23T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.325993 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.326065 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.326077 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.326093 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.326103 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:23Z","lastTransitionTime":"2026-02-23T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.428043 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.428083 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.428093 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.428108 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.428118 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:23Z","lastTransitionTime":"2026-02-23T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.448265 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/0.log" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.450913 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e"} Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.451343 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.468595 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85
f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.484379 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\"
,\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.500144 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.512275 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.523136 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.533069 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.533108 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.533119 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.533136 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.533146 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:23Z","lastTransitionTime":"2026-02-23T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.540101 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.552862 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-
23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.564309 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.574911 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.584533 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.593053 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.609065 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:21Z\\\",\\\"message\\\":\\\" 6561 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 13:09:21.817316 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 13:09:21.817345 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 13:09:21.817371 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 
13:09:21.817393 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 13:09:21.817398 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 13:09:21.817440 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 13:09:21.817458 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 13:09:21.817477 6561 factory.go:656] Stopping watch factory\\\\nI0223 13:09:21.817493 6561 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:21.817514 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 13:09:21.817524 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 13:09:21.817530 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 13:09:21.817535 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 13:09:21.817541 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 13:09:21.817546 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 
13:09:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.621262 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.629519 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.635625 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.635658 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.635666 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.635679 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.635688 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:23Z","lastTransitionTime":"2026-02-23T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.713202 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2"] Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.713683 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.715078 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.716092 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.731490 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.739306 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.739383 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.739399 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.739422 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.739439 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:23Z","lastTransitionTime":"2026-02-23T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.746785 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.764988 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6b
d5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.776737 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\"
,\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.789582 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.802620 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.813620 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.823381 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.836463 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.841669 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.841725 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.841734 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.841750 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.841761 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:23Z","lastTransitionTime":"2026-02-23T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.848531 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0cd8049b-7b79-49cb-9471-811e7651e400-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.848635 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0cd8049b-7b79-49cb-9471-811e7651e400-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.848679 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0cd8049b-7b79-49cb-9471-811e7651e400-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.848726 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsj29\" (UniqueName: \"kubernetes.io/projected/0cd8049b-7b79-49cb-9471-811e7651e400-kube-api-access-wsj29\") pod \"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.861858 4851 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.877310 4851 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.893049 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.904391 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.918215 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.939115 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:21Z\\\",\\\"message\\\":\\\" 6561 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 13:09:21.817316 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 13:09:21.817345 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 13:09:21.817371 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 
13:09:21.817393 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 13:09:21.817398 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 13:09:21.817440 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 13:09:21.817458 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 13:09:21.817477 6561 factory.go:656] Stopping watch factory\\\\nI0223 13:09:21.817493 6561 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:21.817514 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 13:09:21.817524 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 13:09:21.817530 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 13:09:21.817535 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 13:09:21.817541 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 13:09:21.817546 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 
13:09:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.943909 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.943969 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.943982 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.943998 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.944009 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:23Z","lastTransitionTime":"2026-02-23T13:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.949239 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0cd8049b-7b79-49cb-9471-811e7651e400-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.949285 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0cd8049b-7b79-49cb-9471-811e7651e400-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.949320 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0cd8049b-7b79-49cb-9471-811e7651e400-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.949377 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsj29\" (UniqueName: \"kubernetes.io/projected/0cd8049b-7b79-49cb-9471-811e7651e400-kube-api-access-wsj29\") pod 
\"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.950454 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0cd8049b-7b79-49cb-9471-811e7651e400-env-overrides\") pod \"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.951011 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0cd8049b-7b79-49cb-9471-811e7651e400-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.958981 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0cd8049b-7b79-49cb-9471-811e7651e400-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.972136 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsj29\" (UniqueName: \"kubernetes.io/projected/0cd8049b-7b79-49cb-9471-811e7651e400-kube-api-access-wsj29\") pod \"ovnkube-control-plane-749d76644c-bscl2\" (UID: \"0cd8049b-7b79-49cb-9471-811e7651e400\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:23 crc kubenswrapper[4851]: I0223 13:09:23.987117 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 14:32:58.100543372 +0000 UTC Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.025243 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" Feb 23 13:09:24 crc kubenswrapper[4851]: W0223 13:09:24.039175 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cd8049b_7b79_49cb_9471_811e7651e400.slice/crio-307a8946661d6b3db0d2ca4fd351af256915d99823215fa2cde6d8113cf7463e WatchSource:0}: Error finding container 307a8946661d6b3db0d2ca4fd351af256915d99823215fa2cde6d8113cf7463e: Status 404 returned error can't find the container with id 307a8946661d6b3db0d2ca4fd351af256915d99823215fa2cde6d8113cf7463e Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.045970 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.045998 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.046006 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.046018 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.046026 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:24Z","lastTransitionTime":"2026-02-23T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.150188 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.150219 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.150228 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.150242 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.150251 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:24Z","lastTransitionTime":"2026-02-23T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.252891 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.252926 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.252957 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.252995 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.253005 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:24Z","lastTransitionTime":"2026-02-23T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.355275 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.355305 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.355315 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.355342 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.355352 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:24Z","lastTransitionTime":"2026-02-23T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.440455 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jt4wg"] Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.440889 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:24 crc kubenswrapper[4851]: E0223 13:09:24.440963 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.455641 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.457688 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.457719 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.457730 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.457746 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.457757 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:24Z","lastTransitionTime":"2026-02-23T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.460645 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/1.log" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.461375 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/0.log" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.463310 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e" exitCode=1 Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.463366 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.463394 4851 scope.go:117] "RemoveContainer" containerID="f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.463975 4851 scope.go:117] "RemoveContainer" containerID="916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e" Feb 23 13:09:24 crc kubenswrapper[4851]: E0223 13:09:24.464125 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.466561 4851 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" event={"ID":"0cd8049b-7b79-49cb-9471-811e7651e400","Type":"ContainerStarted","Data":"e433703dd0ce869c8a07590de141058945b28eec91be7ff2086fbcd8ff6fedbe"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.466616 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" event={"ID":"0cd8049b-7b79-49cb-9471-811e7651e400","Type":"ContainerStarted","Data":"b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.466629 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" event={"ID":"0cd8049b-7b79-49cb-9471-811e7651e400","Type":"ContainerStarted","Data":"307a8946661d6b3db0d2ca4fd351af256915d99823215fa2cde6d8113cf7463e"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.469481 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.489692 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13
:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.502545 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\"
,\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.513096 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.522692 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.532828 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc 
kubenswrapper[4851]: I0223 13:09:24.543789 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.554301 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd5rs\" (UniqueName: \"kubernetes.io/projected/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-kube-api-access-gd5rs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.554394 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.556870 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.560169 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.560206 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.560215 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.560232 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.560242 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:24Z","lastTransitionTime":"2026-02-23T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.571956 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.587305 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.634318 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.650053 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.655484 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd5rs\" (UniqueName: \"kubernetes.io/projected/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-kube-api-access-gd5rs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:24 crc 
kubenswrapper[4851]: I0223 13:09:24.655653 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:24 crc kubenswrapper[4851]: E0223 13:09:24.655786 4851 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:24 crc kubenswrapper[4851]: E0223 13:09:24.655868 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs podName:b88d393f-3f9d-4c95-b41b-10e998d5ca0f nodeName:}" failed. No retries permitted until 2026-02-23 13:09:25.155850755 +0000 UTC m=+119.837554473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs") pod "network-metrics-daemon-jt4wg" (UID: "b88d393f-3f9d-4c95-b41b-10e998d5ca0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.662874 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.663074 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.663209 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.662859 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.663305 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.663421 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:24Z","lastTransitionTime":"2026-02-23T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.672077 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd5rs\" (UniqueName: \"kubernetes.io/projected/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-kube-api-access-gd5rs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.674714 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.693589 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:21Z\\\",\\\"message\\\":\\\" 6561 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 13:09:21.817316 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 13:09:21.817345 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 13:09:21.817371 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 
13:09:21.817393 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 13:09:21.817398 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 13:09:21.817440 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 13:09:21.817458 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 13:09:21.817477 6561 factory.go:656] Stopping watch factory\\\\nI0223 13:09:21.817493 6561 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:21.817514 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 13:09:21.817524 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 13:09:21.817530 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 13:09:21.817535 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 13:09:21.817541 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 13:09:21.817546 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 
13:09:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.705227 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.720535 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d80504f19186053718da5554a7c48a127f716428c737c38f7c570e9b612389\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:21Z\\\",\\\"message\\\":\\\" 6561 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 13:09:21.817316 6561 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 13:09:21.817345 6561 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 13:09:21.817371 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 
13:09:21.817393 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 13:09:21.817398 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 13:09:21.817440 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 13:09:21.817458 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 13:09:21.817477 6561 factory.go:656] Stopping watch factory\\\\nI0223 13:09:21.817493 6561 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:21.817514 6561 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 13:09:21.817524 6561 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 13:09:21.817530 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 13:09:21.817535 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 13:09:21.817541 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 13:09:21.817546 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 13:09:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"message\\\":\\\"andler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is 
after 2025-08-24T17:21:41Z]\\\\nI0223 13:09:23.284288 6701 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-snjvm in node crc\\\\nI0223 13:09:23.284292 6701 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-d
ir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"nam
e\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: 
I0223 13:09:24.731638 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.741470 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.754548 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.763305 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.765632 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.765659 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.765669 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.765685 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.765698 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:24Z","lastTransitionTime":"2026-02-23T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.772861 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\
\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.781559 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc 
kubenswrapper[4851]: I0223 13:09:24.798210 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.809773 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\
\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.820072 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.831060 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28
eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.841360 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.850970 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.864186 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.867081 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.867120 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.867131 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.867147 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.867159 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:24Z","lastTransitionTime":"2026-02-23T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.875464 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:24Z 
is after 2025-08-24T17:21:41Z" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.968303 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.968397 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:24 crc kubenswrapper[4851]: E0223 13:09:24.968435 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.968303 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:24 crc kubenswrapper[4851]: E0223 13:09:24.968527 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:24 crc kubenswrapper[4851]: E0223 13:09:24.968683 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.969527 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.969560 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.969571 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.969585 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.969597 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:24Z","lastTransitionTime":"2026-02-23T13:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:24 crc kubenswrapper[4851]: I0223 13:09:24.988075 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:57:43.967128897 +0000 UTC Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.071356 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.071392 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.071403 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.071420 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.071432 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:25Z","lastTransitionTime":"2026-02-23T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.159498 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:25 crc kubenswrapper[4851]: E0223 13:09:25.159702 4851 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:25 crc kubenswrapper[4851]: E0223 13:09:25.159795 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs podName:b88d393f-3f9d-4c95-b41b-10e998d5ca0f nodeName:}" failed. No retries permitted until 2026-02-23 13:09:26.159777131 +0000 UTC m=+120.841480809 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs") pod "network-metrics-daemon-jt4wg" (UID: "b88d393f-3f9d-4c95-b41b-10e998d5ca0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.173169 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.173212 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.173224 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.173238 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.173247 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:25Z","lastTransitionTime":"2026-02-23T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.275276 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.275311 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.275323 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.275351 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.275361 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:25Z","lastTransitionTime":"2026-02-23T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.377802 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.377848 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.377859 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.377876 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.377887 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:25Z","lastTransitionTime":"2026-02-23T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.470517 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/1.log" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.473415 4851 scope.go:117] "RemoveContainer" containerID="916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e" Feb 23 13:09:25 crc kubenswrapper[4851]: E0223 13:09:25.473556 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.480626 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.480649 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.480657 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.480668 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.480677 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:25Z","lastTransitionTime":"2026-02-23T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.485307 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.495014 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.507150 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.519120 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.529296 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.539554 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube
-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.550463 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.559269 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.576098 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"message\\\":\\\"andler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z]\\\\nI0223 13:09:23.284288 6701 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-snjvm in node crc\\\\nI0223 13:09:23.284292 6701 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29
c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.583475 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.583512 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.583523 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.583540 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.583551 4851 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:25Z","lastTransitionTime":"2026-02-23T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.588969 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.600612 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.617490 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.631124 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.642828 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.654182 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.664734 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc 
kubenswrapper[4851]: I0223 13:09:25.685627 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.685668 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.685679 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.685695 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.685704 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:25Z","lastTransitionTime":"2026-02-23T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.788483 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.788527 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.788537 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.788555 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.788567 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:25Z","lastTransitionTime":"2026-02-23T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.890865 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.890898 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.890906 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.890919 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.890928 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:25Z","lastTransitionTime":"2026-02-23T13:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.967803 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:25 crc kubenswrapper[4851]: E0223 13:09:25.967916 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.981755 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.988648 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:40:17.645097646 +0000 UTC Feb 23 13:09:25 crc kubenswrapper[4851]: E0223 13:09:25.991822 4851 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 23 13:09:25 crc kubenswrapper[4851]: I0223 13:09:25.992785 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:25Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.002441 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.020620 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.036830 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.053237 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: E0223 13:09:26.056366 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.080435 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"message\\\":\\\"andler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z]\\\\nI0223 13:09:23.284288 6701 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-snjvm in node crc\\\\nI0223 13:09:23.284292 6701 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29
c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.099244 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.114907 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.137771 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.150461 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.168152 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.168396 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:26 crc kubenswrapper[4851]: E0223 13:09:26.168808 4851 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:26 crc kubenswrapper[4851]: E0223 13:09:26.169024 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs podName:b88d393f-3f9d-4c95-b41b-10e998d5ca0f nodeName:}" failed. No retries permitted until 2026-02-23 13:09:28.168975882 +0000 UTC m=+122.850679750 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs") pod "network-metrics-daemon-jt4wg" (UID: "b88d393f-3f9d-4c95-b41b-10e998d5ca0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.182612 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 
23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.202621 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\
\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847
b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.218607 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\"
,\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.231490 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:26Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.967842 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.967882 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.967905 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:26 crc kubenswrapper[4851]: E0223 13:09:26.968460 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:26 crc kubenswrapper[4851]: E0223 13:09:26.968561 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:26 crc kubenswrapper[4851]: E0223 13:09:26.968285 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:26 crc kubenswrapper[4851]: I0223 13:09:26.989287 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:54:11.63385485 +0000 UTC Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.186504 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.186539 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.186549 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.186564 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.186576 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:27Z","lastTransitionTime":"2026-02-23T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:27 crc kubenswrapper[4851]: E0223 13:09:27.198636 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:27Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.201557 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.201584 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.201593 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.201608 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.201618 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:27Z","lastTransitionTime":"2026-02-23T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:27 crc kubenswrapper[4851]: E0223 13:09:27.213773 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:27Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.216773 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.216800 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.216810 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.216825 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.216836 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:27Z","lastTransitionTime":"2026-02-23T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:27 crc kubenswrapper[4851]: E0223 13:09:27.227702 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:27Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.230837 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.230875 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.230884 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.230897 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.230907 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:27Z","lastTransitionTime":"2026-02-23T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:27 crc kubenswrapper[4851]: E0223 13:09:27.243367 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:27Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.247593 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.247637 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.247647 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.247683 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.247692 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:27Z","lastTransitionTime":"2026-02-23T13:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:27 crc kubenswrapper[4851]: E0223 13:09:27.258679 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:27Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:27 crc kubenswrapper[4851]: E0223 13:09:27.258791 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.967731 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:27 crc kubenswrapper[4851]: E0223 13:09:27.967876 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:27 crc kubenswrapper[4851]: I0223 13:09:27.990416 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 03:00:56.38239255 +0000 UTC Feb 23 13:09:28 crc kubenswrapper[4851]: I0223 13:09:28.190037 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:28 crc kubenswrapper[4851]: E0223 13:09:28.190163 4851 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:28 crc kubenswrapper[4851]: E0223 13:09:28.190232 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs podName:b88d393f-3f9d-4c95-b41b-10e998d5ca0f nodeName:}" failed. No retries permitted until 2026-02-23 13:09:32.190209808 +0000 UTC m=+126.871913486 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs") pod "network-metrics-daemon-jt4wg" (UID: "b88d393f-3f9d-4c95-b41b-10e998d5ca0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:28 crc kubenswrapper[4851]: I0223 13:09:28.968013 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:28 crc kubenswrapper[4851]: I0223 13:09:28.968093 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:28 crc kubenswrapper[4851]: E0223 13:09:28.968138 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:28 crc kubenswrapper[4851]: I0223 13:09:28.968170 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:28 crc kubenswrapper[4851]: E0223 13:09:28.968271 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:28 crc kubenswrapper[4851]: E0223 13:09:28.968319 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:28 crc kubenswrapper[4851]: I0223 13:09:28.990743 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:29:44.728653671 +0000 UTC Feb 23 13:09:29 crc kubenswrapper[4851]: I0223 13:09:29.968288 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:29 crc kubenswrapper[4851]: E0223 13:09:29.968454 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:29 crc kubenswrapper[4851]: I0223 13:09:29.991148 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:45:26.5720359 +0000 UTC Feb 23 13:09:30 crc kubenswrapper[4851]: I0223 13:09:30.968085 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:30 crc kubenswrapper[4851]: I0223 13:09:30.968195 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:30 crc kubenswrapper[4851]: E0223 13:09:30.968284 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:30 crc kubenswrapper[4851]: I0223 13:09:30.968221 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:30 crc kubenswrapper[4851]: E0223 13:09:30.968455 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:30 crc kubenswrapper[4851]: E0223 13:09:30.968527 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:30 crc kubenswrapper[4851]: I0223 13:09:30.991708 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 23:39:47.437782062 +0000 UTC Feb 23 13:09:31 crc kubenswrapper[4851]: E0223 13:09:31.058441 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:09:31 crc kubenswrapper[4851]: I0223 13:09:31.968239 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:31 crc kubenswrapper[4851]: E0223 13:09:31.968399 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:31 crc kubenswrapper[4851]: I0223 13:09:31.992430 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 11:29:09.774676779 +0000 UTC Feb 23 13:09:32 crc kubenswrapper[4851]: I0223 13:09:32.235167 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:32 crc kubenswrapper[4851]: E0223 13:09:32.235571 4851 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:32 crc kubenswrapper[4851]: E0223 13:09:32.235779 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs podName:b88d393f-3f9d-4c95-b41b-10e998d5ca0f nodeName:}" failed. No retries permitted until 2026-02-23 13:09:40.235721642 +0000 UTC m=+134.917425420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs") pod "network-metrics-daemon-jt4wg" (UID: "b88d393f-3f9d-4c95-b41b-10e998d5ca0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:32 crc kubenswrapper[4851]: I0223 13:09:32.968253 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:32 crc kubenswrapper[4851]: I0223 13:09:32.968292 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:32 crc kubenswrapper[4851]: E0223 13:09:32.968545 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:32 crc kubenswrapper[4851]: I0223 13:09:32.968581 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:32 crc kubenswrapper[4851]: E0223 13:09:32.968608 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:32 crc kubenswrapper[4851]: E0223 13:09:32.968774 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:32 crc kubenswrapper[4851]: I0223 13:09:32.993611 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 23:52:59.579996986 +0000 UTC Feb 23 13:09:33 crc kubenswrapper[4851]: I0223 13:09:33.968480 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:33 crc kubenswrapper[4851]: E0223 13:09:33.968639 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:33 crc kubenswrapper[4851]: I0223 13:09:33.994655 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:35:11.529855365 +0000 UTC Feb 23 13:09:34 crc kubenswrapper[4851]: I0223 13:09:34.968270 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:34 crc kubenswrapper[4851]: I0223 13:09:34.968312 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:34 crc kubenswrapper[4851]: I0223 13:09:34.968270 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:34 crc kubenswrapper[4851]: E0223 13:09:34.968432 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:34 crc kubenswrapper[4851]: E0223 13:09:34.968516 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:34 crc kubenswrapper[4851]: E0223 13:09:34.968570 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:34 crc kubenswrapper[4851]: I0223 13:09:34.995021 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:40:23.097432445 +0000 UTC Feb 23 13:09:35 crc kubenswrapper[4851]: I0223 13:09:35.968523 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:35 crc kubenswrapper[4851]: E0223 13:09:35.968799 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:35 crc kubenswrapper[4851]: I0223 13:09:35.970282 4851 scope.go:117] "RemoveContainer" containerID="916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e" Feb 23 13:09:35 crc kubenswrapper[4851]: I0223 13:09:35.996039 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:31:25.380635341 +0000 UTC Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.000214 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:35Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.016085 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.035260 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.049146 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: E0223 13:09:36.060025 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.062028 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc 
kubenswrapper[4851]: I0223 13:09:36.076747 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.094109 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.109535 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.123753 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.136444 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.148505 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube
-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.158305 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.166830 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.183188 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"message\\\":\\\"andler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z]\\\\nI0223 13:09:23.284288 6701 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-snjvm in node crc\\\\nI0223 13:09:23.284292 6701 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29
c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.196672 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.206792 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.512466 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/1.log" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.514852 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd"} Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.515309 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.534584 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.547105 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd
0ce869c8a07590de141058945b28eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.560240 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.570952 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.585735 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.597200 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.606155 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.623598 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"message\\\":\\\"andler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z]\\\\nI0223 13:09:23.284288 6701 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-snjvm in node crc\\\\nI0223 13:09:23.284292 6701 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.635187 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.645473 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.654539 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.664610 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.674374 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.685343 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc 
kubenswrapper[4851]: I0223 13:09:36.706277 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.720034 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\
\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:36Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.967664 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.967702 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.967779 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:36 crc kubenswrapper[4851]: E0223 13:09:36.967803 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:36 crc kubenswrapper[4851]: E0223 13:09:36.967878 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:36 crc kubenswrapper[4851]: E0223 13:09:36.967936 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:36 crc kubenswrapper[4851]: I0223 13:09:36.997302 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:08:25.817463669 +0000 UTC Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.487907 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.487942 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.487952 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.487968 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" 
Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.487976 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:37Z","lastTransitionTime":"2026-02-23T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:37 crc kubenswrapper[4851]: E0223 13:09:37.501547 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.504432 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.504468 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.504479 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.504496 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.504509 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:37Z","lastTransitionTime":"2026-02-23T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:37 crc kubenswrapper[4851]: E0223 13:09:37.515297 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.518140 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.518185 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.518197 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.518213 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.518225 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:37Z","lastTransitionTime":"2026-02-23T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.520091 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/2.log" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.520979 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/1.log" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.523934 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd" exitCode=1 Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.524005 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd"} Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.524076 4851 scope.go:117] "RemoveContainer" containerID="916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.524697 4851 scope.go:117] "RemoveContainer" containerID="1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd" Feb 23 13:09:37 crc kubenswrapper[4851]: E0223 13:09:37.524886 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" Feb 23 13:09:37 crc kubenswrapper[4851]: E0223 13:09:37.529855 4851 kubelet_node_status.go:585] "Error updating 
node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.532958 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.533001 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.533018 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.533042 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.533058 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:37Z","lastTransitionTime":"2026-02-23T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.539778 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc 
kubenswrapper[4851]: E0223 13:09:37.546014 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.549658 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.549682 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.549690 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.549703 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.549712 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:37Z","lastTransitionTime":"2026-02-23T13:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:37 crc kubenswrapper[4851]: E0223 13:09:37.560951 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: E0223 13:09:37.561153 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.562974 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.577994 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\"
,\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.590984 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.607810 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.627893 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.643524 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.658744 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.680537 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.695771 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.721362 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://916fb75e383806ca11efc082c843f6e423d5015e61a4649307ad5fcde63ff79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"message\\\":\\\"andler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:23Z is after 2025-08-24T17:21:41Z]\\\\nI0223 13:09:23.284288 6701 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-snjvm in node crc\\\\nI0223 13:09:23.284292 6701 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:36Z\\\",\\\"message\\\":\\\" 6935 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893532 6935 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893692 6935 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894068 6935 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.894083 6935 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894172 6935 factory.go:656] Stopping watch factory\\\\nI0223 13:09:36.914185 6935 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0223 13:09:36.914224 6935 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0223 13:09:36.914292 6935 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:36.914315 6935 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0223 13:09:36.914453 6935 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.741799 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.757619 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.776175 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.794048 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.807622 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25
753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:37Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.968759 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:37 crc kubenswrapper[4851]: E0223 13:09:37.968970 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:37 crc kubenswrapper[4851]: I0223 13:09:37.997752 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:18:06.236094448 +0000 UTC Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.531743 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/2.log" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.537538 4851 scope.go:117] "RemoveContainer" containerID="1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd" Feb 23 13:09:38 crc kubenswrapper[4851]: E0223 13:09:38.537828 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.557604 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.578626 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.596874 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc 
kubenswrapper[4851]: I0223 13:09:38.629097 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.648366 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\
\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.670478 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.687655 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb15
14f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.706407 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.722602 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.744032 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.759201 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.770740 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.789101 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:36Z\\\",\\\"message\\\":\\\" 6935 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893532 6935 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893692 6935 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894068 6935 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.894083 6935 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894172 6935 factory.go:656] Stopping watch factory\\\\nI0223 13:09:36.914185 6935 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0223 13:09:36.914224 6935 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0223 13:09:36.914292 6935 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:36.914315 6935 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0223 13:09:36.914453 6935 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29
c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.805508 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.816976 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.827077 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:38Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.968133 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.968186 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.968148 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:38 crc kubenswrapper[4851]: E0223 13:09:38.968289 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:38 crc kubenswrapper[4851]: E0223 13:09:38.968402 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:38 crc kubenswrapper[4851]: E0223 13:09:38.968486 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.980009 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 23 13:09:38 crc kubenswrapper[4851]: I0223 13:09:38.998832 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 23:36:09.69915342 +0000 UTC Feb 23 13:09:39 crc kubenswrapper[4851]: I0223 13:09:39.968291 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:39 crc kubenswrapper[4851]: E0223 13:09:39.968499 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:39 crc kubenswrapper[4851]: I0223 13:09:39.978541 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 23 13:09:39 crc kubenswrapper[4851]: I0223 13:09:39.999213 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 21:26:58.847126161 +0000 UTC Feb 23 13:09:40 crc kubenswrapper[4851]: I0223 13:09:40.319460 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:40 crc kubenswrapper[4851]: E0223 13:09:40.319679 4851 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:40 crc kubenswrapper[4851]: E0223 13:09:40.319767 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs podName:b88d393f-3f9d-4c95-b41b-10e998d5ca0f nodeName:}" failed. No retries permitted until 2026-02-23 13:09:56.319749536 +0000 UTC m=+151.001453214 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs") pod "network-metrics-daemon-jt4wg" (UID: "b88d393f-3f9d-4c95-b41b-10e998d5ca0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:40 crc kubenswrapper[4851]: I0223 13:09:40.968507 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:40 crc kubenswrapper[4851]: I0223 13:09:40.968574 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:40 crc kubenswrapper[4851]: E0223 13:09:40.968664 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:40 crc kubenswrapper[4851]: I0223 13:09:40.968542 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:40 crc kubenswrapper[4851]: E0223 13:09:40.968853 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:40 crc kubenswrapper[4851]: E0223 13:09:40.968946 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:41 crc kubenswrapper[4851]: I0223 13:09:40.999856 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:51:26.927586909 +0000 UTC Feb 23 13:09:41 crc kubenswrapper[4851]: E0223 13:09:41.061774 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:09:41 crc kubenswrapper[4851]: I0223 13:09:41.967973 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:41 crc kubenswrapper[4851]: E0223 13:09:41.968159 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:42 crc kubenswrapper[4851]: I0223 13:09:42.001011 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:07:11.933722035 +0000 UTC Feb 23 13:09:42 crc kubenswrapper[4851]: I0223 13:09:42.843517 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.843779 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:10:46.843748875 +0000 UTC m=+201.525452563 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:09:42 crc kubenswrapper[4851]: I0223 13:09:42.843886 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:42 crc kubenswrapper[4851]: I0223 13:09:42.843993 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.844013 4851 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.844050 4851 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.844098 4851 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:10:46.844082444 +0000 UTC m=+201.525786202 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.844114 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:10:46.844106205 +0000 UTC m=+201.525809883 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:09:42 crc kubenswrapper[4851]: I0223 13:09:42.945296 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:42 crc kubenswrapper[4851]: I0223 13:09:42.945426 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.945472 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.945494 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.945504 4851 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.945558 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 13:10:46.945541938 +0000 UTC m=+201.627245616 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.945608 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.945636 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.945652 4851 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.945729 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 13:10:46.945704363 +0000 UTC m=+201.627408071 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 13:09:42 crc kubenswrapper[4851]: I0223 13:09:42.967968 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 13:09:42 crc kubenswrapper[4851]: I0223 13:09:42.967990 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 13:09:42 crc kubenswrapper[4851]: I0223 13:09:42.968023 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.968092 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.968174 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 13:09:42 crc kubenswrapper[4851]: E0223 13:09:42.968273 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 13:09:43 crc kubenswrapper[4851]: I0223 13:09:43.001613 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 14:28:00.358126998 +0000 UTC
Feb 23 13:09:43 crc kubenswrapper[4851]: I0223 13:09:43.968092 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg"
Feb 23 13:09:43 crc kubenswrapper[4851]: E0223 13:09:43.968284 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f"
Feb 23 13:09:44 crc kubenswrapper[4851]: I0223 13:09:44.002470 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 10:01:02.285487188 +0000 UTC
Feb 23 13:09:44 crc kubenswrapper[4851]: I0223 13:09:44.967802 4851 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 13:09:44 crc kubenswrapper[4851]: I0223 13:09:44.967859 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 13:09:44 crc kubenswrapper[4851]: I0223 13:09:44.967825 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 13:09:44 crc kubenswrapper[4851]: E0223 13:09:44.967966 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 23 13:09:44 crc kubenswrapper[4851]: E0223 13:09:44.968028 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 13:09:44 crc kubenswrapper[4851]: E0223 13:09:44.968077 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 13:09:45 crc kubenswrapper[4851]: I0223 13:09:45.003482 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:02:29.66083955 +0000 UTC
Feb 23 13:09:45 crc kubenswrapper[4851]: I0223 13:09:45.968154 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg"
Feb 23 13:09:45 crc kubenswrapper[4851]: E0223 13:09:45.968596 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f"
Feb 23 13:09:45 crc kubenswrapper[4851]: I0223 13:09:45.985203 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:45Z is after 2025-08-24T17:21:41Z"
Feb 23 13:09:45 crc kubenswrapper[4851]: I0223 13:09:45.997725 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:45Z is after 2025-08-24T17:21:41Z"
Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.004179 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:17:33.568448287 +0000 UTC
Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.011951 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc 
kubenswrapper[4851]: I0223 13:09:46.035719 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z"
Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.049954 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\
\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z"
Feb 23 13:09:46 crc kubenswrapper[4851]: E0223 13:09:46.062444 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.063666 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.079222 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.097386 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.111197 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.127305 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.143215 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.156539 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.177431 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:36Z\\\",\\\"message\\\":\\\" 6935 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893532 6935 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893692 6935 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894068 6935 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.894083 6935 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894172 6935 factory.go:656] Stopping watch factory\\\\nI0223 13:09:36.914185 6935 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0223 13:09:36.914224 6935 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0223 13:09:36.914292 6935 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:36.914315 6935 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0223 13:09:36.914453 6935 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29
c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.189784 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1f5cef8-3562-422d-99bb-28534ec3493a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bca4a5c072768706140bbe965a5b2fdcfaf1e4b06f2f9043f1a08efe2717fe0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afedae41697753018a8ebc39fb53c0a05f4cca642bcaa37f974d458702156cf8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:07:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 13:07:28.208218 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 13:07:28.210715 1 observer_polling.go:159] Starting file observer\\\\nI0223 13:07:28.326923 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 13:07:28.345176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 13:07:52.054537 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 13:07:52.054620 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eae57f25a2c5c6185b9efefc8b3729b06b64180892ecced033765aaebf9b5fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e5509ea096a7c151dfda3739dbbbe80101948468b414e6651996230399b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b066313052e450595bc08b10fa6316bfd3dd51d9c531f6c771a3a5ac138d785b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.201716 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"330d0749-a2d2-43c8-91c5-4defa068ac6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d34ef95174e2912058b7d4a786eb1dcbee4f58993800508fde8ff98e692869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efba4302294164dc5afd345375abda413072c51da085246a3a27c4e727a2c0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e437301c1ee7791c731beb8f4213ee36b7eb4e71d9141b1e618c740f554202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.212996 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.223046 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.233058 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:46Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.968634 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.968668 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:46 crc kubenswrapper[4851]: I0223 13:09:46.968713 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:46 crc kubenswrapper[4851]: E0223 13:09:46.968791 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:46 crc kubenswrapper[4851]: E0223 13:09:46.968939 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:46 crc kubenswrapper[4851]: E0223 13:09:46.968985 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.005211 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 14:13:17.890298612 +0000 UTC Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.612798 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.612840 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.612853 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.612875 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.612891 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:47Z","lastTransitionTime":"2026-02-23T13:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:47 crc kubenswrapper[4851]: E0223 13:09:47.628280 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:47Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.632388 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.632437 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.632446 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.632458 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.632468 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:47Z","lastTransitionTime":"2026-02-23T13:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:47 crc kubenswrapper[4851]: E0223 13:09:47.649921 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:47Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.654523 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.654595 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.654621 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.654650 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.654670 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:47Z","lastTransitionTime":"2026-02-23T13:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:47 crc kubenswrapper[4851]: E0223 13:09:47.669618 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:47Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.674726 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.674764 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.674773 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.674786 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.674796 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:47Z","lastTransitionTime":"2026-02-23T13:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:47 crc kubenswrapper[4851]: E0223 13:09:47.693917 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:47Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.699271 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.699304 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.699314 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.699348 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.699359 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:47Z","lastTransitionTime":"2026-02-23T13:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:47 crc kubenswrapper[4851]: E0223 13:09:47.716089 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:47Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:47 crc kubenswrapper[4851]: E0223 13:09:47.716304 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:09:47 crc kubenswrapper[4851]: I0223 13:09:47.967712 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:47 crc kubenswrapper[4851]: E0223 13:09:47.967881 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:48 crc kubenswrapper[4851]: I0223 13:09:48.005823 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:28:27.054373732 +0000 UTC Feb 23 13:09:48 crc kubenswrapper[4851]: I0223 13:09:48.968499 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:48 crc kubenswrapper[4851]: I0223 13:09:48.968555 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:48 crc kubenswrapper[4851]: I0223 13:09:48.968551 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:48 crc kubenswrapper[4851]: E0223 13:09:48.968744 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:48 crc kubenswrapper[4851]: E0223 13:09:48.968928 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:48 crc kubenswrapper[4851]: E0223 13:09:48.969072 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:49 crc kubenswrapper[4851]: I0223 13:09:49.006308 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:59:29.388924897 +0000 UTC Feb 23 13:09:49 crc kubenswrapper[4851]: I0223 13:09:49.967961 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:49 crc kubenswrapper[4851]: E0223 13:09:49.968202 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:49 crc kubenswrapper[4851]: I0223 13:09:49.969491 4851 scope.go:117] "RemoveContainer" containerID="1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd" Feb 23 13:09:49 crc kubenswrapper[4851]: E0223 13:09:49.969974 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" Feb 23 13:09:50 crc kubenswrapper[4851]: I0223 13:09:50.006528 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 04:26:55.063988167 +0000 UTC Feb 23 13:09:50 crc kubenswrapper[4851]: I0223 13:09:50.968296 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:50 crc kubenswrapper[4851]: I0223 13:09:50.968400 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:50 crc kubenswrapper[4851]: I0223 13:09:50.968450 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:50 crc kubenswrapper[4851]: E0223 13:09:50.968512 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:50 crc kubenswrapper[4851]: E0223 13:09:50.968692 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:50 crc kubenswrapper[4851]: E0223 13:09:50.968895 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:51 crc kubenswrapper[4851]: I0223 13:09:51.007104 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:18:09.704627363 +0000 UTC Feb 23 13:09:51 crc kubenswrapper[4851]: E0223 13:09:51.063832 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:09:51 crc kubenswrapper[4851]: I0223 13:09:51.968584 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:51 crc kubenswrapper[4851]: E0223 13:09:51.968842 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:52 crc kubenswrapper[4851]: I0223 13:09:52.007819 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:23:36.124178443 +0000 UTC Feb 23 13:09:52 crc kubenswrapper[4851]: I0223 13:09:52.968248 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:52 crc kubenswrapper[4851]: E0223 13:09:52.968388 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:52 crc kubenswrapper[4851]: I0223 13:09:52.968456 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:52 crc kubenswrapper[4851]: I0223 13:09:52.968470 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:52 crc kubenswrapper[4851]: E0223 13:09:52.968614 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:52 crc kubenswrapper[4851]: E0223 13:09:52.968945 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:53 crc kubenswrapper[4851]: I0223 13:09:53.008851 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:05:14.290691161 +0000 UTC Feb 23 13:09:53 crc kubenswrapper[4851]: I0223 13:09:53.968516 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:53 crc kubenswrapper[4851]: E0223 13:09:53.968783 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:54 crc kubenswrapper[4851]: I0223 13:09:54.009454 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 04:42:42.430559877 +0000 UTC Feb 23 13:09:54 crc kubenswrapper[4851]: I0223 13:09:54.968326 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:54 crc kubenswrapper[4851]: I0223 13:09:54.968464 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:54 crc kubenswrapper[4851]: E0223 13:09:54.968570 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:54 crc kubenswrapper[4851]: I0223 13:09:54.968627 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:54 crc kubenswrapper[4851]: E0223 13:09:54.968765 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:54 crc kubenswrapper[4851]: E0223 13:09:54.968915 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:55 crc kubenswrapper[4851]: I0223 13:09:55.010092 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:05:18.797229849 +0000 UTC Feb 23 13:09:55 crc kubenswrapper[4851]: I0223 13:09:55.968405 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:55 crc kubenswrapper[4851]: E0223 13:09:55.968595 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:55 crc kubenswrapper[4851]: I0223 13:09:55.990185 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:55Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.005662 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.010234 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:59:48.740940024 +0000 UTC Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.017710 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc 
kubenswrapper[4851]: I0223 13:09:56.037085 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.052473 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\
\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.062960 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: E0223 13:09:56.064525 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.072799 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.085511 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.101006 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.119837 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.135095 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.147742 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.192662 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:36Z\\\",\\\"message\\\":\\\" 6935 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893532 6935 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893692 6935 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894068 6935 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.894083 6935 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894172 6935 factory.go:656] Stopping watch factory\\\\nI0223 13:09:36.914185 6935 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0223 13:09:36.914224 6935 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0223 13:09:36.914292 6935 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:36.914315 6935 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0223 13:09:36.914453 6935 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29
c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.218963 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1f5cef8-3562-422d-99bb-28534ec3493a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bca4a5c072768706140bbe965a5b2fdcfaf1e4b06f2f9043f1a08efe2717fe0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afedae41697753018a8ebc39fb53c0a05f4cca642bcaa37f974d458702156cf8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:07:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 13:07:28.208218 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 13:07:28.210715 1 observer_polling.go:159] Starting file observer\\\\nI0223 13:07:28.326923 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 13:07:28.345176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 13:07:52.054537 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 13:07:52.054620 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eae57f25a2c5c6185b9efefc8b3729b06b64180892ecced033765aaebf9b5fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e5509ea096a7c151dfda3739dbbbe80101948468b414e6651996230399b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b066313052e450595bc08b10fa6316bfd3dd51d9c531f6c771a3a5ac138d785b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.233383 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"330d0749-a2d2-43c8-91c5-4defa068ac6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d34ef95174e2912058b7d4a786eb1dcbee4f58993800508fde8ff98e692869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efba4302294164dc5afd345375abda413072c51da085246a3a27c4e727a2c0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e437301c1ee7791c731beb8f4213ee36b7eb4e71d9141b1e618c740f554202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.245123 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.257056 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.266808 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:56Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.403610 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:56 crc kubenswrapper[4851]: E0223 13:09:56.403922 4851 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:56 crc kubenswrapper[4851]: E0223 13:09:56.404099 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs podName:b88d393f-3f9d-4c95-b41b-10e998d5ca0f nodeName:}" failed. No retries permitted until 2026-02-23 13:10:28.404054652 +0000 UTC m=+183.085758500 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs") pod "network-metrics-daemon-jt4wg" (UID: "b88d393f-3f9d-4c95-b41b-10e998d5ca0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.968644 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.968645 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:56 crc kubenswrapper[4851]: E0223 13:09:56.968908 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:56 crc kubenswrapper[4851]: I0223 13:09:56.968644 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:56 crc kubenswrapper[4851]: E0223 13:09:56.969011 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:56 crc kubenswrapper[4851]: E0223 13:09:56.969114 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.011012 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:53:00.216940817 +0000 UTC Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.898173 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.898263 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.898448 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.899060 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.899146 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:57Z","lastTransitionTime":"2026-02-23T13:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:57 crc kubenswrapper[4851]: E0223 13:09:57.917030 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:57Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.926317 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.926381 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.926391 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.926407 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.926418 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:57Z","lastTransitionTime":"2026-02-23T13:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:57 crc kubenswrapper[4851]: E0223 13:09:57.942349 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:57Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.947399 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.947435 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.947445 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.947462 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.947473 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:57Z","lastTransitionTime":"2026-02-23T13:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:57 crc kubenswrapper[4851]: E0223 13:09:57.962934 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:57Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.967558 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.967595 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.967605 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.967623 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.967644 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:57Z","lastTransitionTime":"2026-02-23T13:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.968604 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:57 crc kubenswrapper[4851]: E0223 13:09:57.968757 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:09:57 crc kubenswrapper[4851]: E0223 13:09:57.980773 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:57Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.985281 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.985351 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.985365 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.985384 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:09:57 crc kubenswrapper[4851]: I0223 13:09:57.985398 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:09:57Z","lastTransitionTime":"2026-02-23T13:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 13:09:58 crc kubenswrapper[4851]: E0223 13:09:58.002801 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ce3a304-1be6-4250-ba8b-ce6e05e05ddb\\\",\\\"systemUUID\\\":\\\"147f526a-cf20-4c21-b33c-eacf21a9553b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:58Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:58 crc kubenswrapper[4851]: E0223 13:09:58.002922 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:09:58 crc kubenswrapper[4851]: I0223 13:09:58.011877 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:06:15.966855115 +0000 UTC Feb 23 13:09:58 crc kubenswrapper[4851]: I0223 13:09:58.967938 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:09:58 crc kubenswrapper[4851]: I0223 13:09:58.967971 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:09:58 crc kubenswrapper[4851]: I0223 13:09:58.967979 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:09:58 crc kubenswrapper[4851]: E0223 13:09:58.968055 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:09:58 crc kubenswrapper[4851]: E0223 13:09:58.968197 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:09:58 crc kubenswrapper[4851]: E0223 13:09:58.968358 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.012559 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:09:19.109706605 +0000 UTC Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.607449 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7cvl_d14644c4-9d6f-4a06-bc4a-85795d4be4cd/kube-multus/0.log" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.607494 4851 generic.go:334] "Generic (PLEG): container finished" podID="d14644c4-9d6f-4a06-bc4a-85795d4be4cd" containerID="61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb" exitCode=1 Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.607527 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7cvl" 
event={"ID":"d14644c4-9d6f-4a06-bc4a-85795d4be4cd","Type":"ContainerDied","Data":"61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb"} Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.607924 4851 scope.go:117] "RemoveContainer" containerID="61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.627634 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc 
kubenswrapper[4851]: I0223 13:09:59.656636 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.674648 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\
\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.687161 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.698397 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.711893 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.724190 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.741045 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.757320 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:58Z\\\",\\\"message\\\":\\\"2026-02-23T13:09:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_578bea46-36bf-4a1b-9081-5419fa07d0fd\\\\n2026-02-23T13:09:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_578bea46-36bf-4a1b-9081-5419fa07d0fd to /host/opt/cni/bin/\\\\n2026-02-23T13:09:13Z [verbose] multus-daemon started\\\\n2026-02-23T13:09:13Z [verbose] Readiness Indicator file check\\\\n2026-02-23T13:09:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.767545 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"na
me\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.792319 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:36Z\\\",\\\"message\\\":\\\" 6935 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893532 6935 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893692 6935 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894068 6935 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.894083 6935 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894172 6935 factory.go:656] Stopping watch factory\\\\nI0223 13:09:36.914185 6935 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0223 13:09:36.914224 6935 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0223 13:09:36.914292 6935 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:36.914315 6935 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0223 13:09:36.914453 6935 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29
c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.806594 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1f5cef8-3562-422d-99bb-28534ec3493a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bca4a5c072768706140bbe965a5b2fdcfaf1e4b06f2f9043f1a08efe2717fe0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afedae41697753018a8ebc39fb53c0a05f4cca642bcaa37f974d458702156cf8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:07:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 13:07:28.208218 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 13:07:28.210715 1 observer_polling.go:159] Starting file observer\\\\nI0223 13:07:28.326923 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 13:07:28.345176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 13:07:52.054537 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 13:07:52.054620 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eae57f25a2c5c6185b9efefc8b3729b06b64180892ecced033765aaebf9b5fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e5509ea096a7c151dfda3739dbbbe80101948468b414e6651996230399b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b066313052e450595bc08b10fa6316bfd3dd51d9c531f6c771a3a5ac138d785b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.821404 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"330d0749-a2d2-43c8-91c5-4defa068ac6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d34ef95174e2912058b7d4a786eb1dcbee4f58993800508fde8ff98e692869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efba4302294164dc5afd345375abda413072c51da085246a3a27c4e727a2c0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e437301c1ee7791c731beb8f4213ee36b7eb4e71d9141b1e618c740f554202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.839969 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.855555 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.866732 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.886861 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.903528 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25
753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:09:59Z is after 2025-08-24T17:21:41Z" Feb 23 13:09:59 crc kubenswrapper[4851]: I0223 13:09:59.967993 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:09:59 crc kubenswrapper[4851]: E0223 13:09:59.968153 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.013357 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:48:26.836447244 +0000 UTC Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.613562 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7cvl_d14644c4-9d6f-4a06-bc4a-85795d4be4cd/kube-multus/0.log" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.613910 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7cvl" event={"ID":"d14644c4-9d6f-4a06-bc4a-85795d4be4cd","Type":"ContainerStarted","Data":"f23e3112452e76d2708be5f07b2c788533677d8137785411dba75d1a469195d3"} Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.630673 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\"
,\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.646145 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.662138 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.676637 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc 
kubenswrapper[4851]: I0223 13:10:00.696599 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.713298 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f
746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.727632 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23e3112452e76d2708be5f07b2c788533677d8137785411dba75d1a469195d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-02-23T13:09:58Z\\\",\\\"message\\\":\\\"2026-02-23T13:09:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_578bea46-36bf-4a1b-9081-5419fa07d0fd\\\\n2026-02-23T13:09:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_578bea46-36bf-4a1b-9081-5419fa07d0fd to /host/opt/cni/bin/\\\\n2026-02-23T13:09:13Z [verbose] multus-daemon started\\\\n2026-02-23T13:09:13Z [verbose] Readiness Indicator file check\\\\n2026-02-23T13:09:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.738678 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28
eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.751137 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.764179 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.778396 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410
cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.792491 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.801220 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.818288 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:36Z\\\",\\\"message\\\":\\\" 6935 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893532 6935 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893692 6935 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894068 6935 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.894083 6935 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894172 6935 factory.go:656] Stopping watch factory\\\\nI0223 13:09:36.914185 6935 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0223 13:09:36.914224 6935 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0223 13:09:36.914292 6935 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:36.914315 6935 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0223 13:09:36.914453 6935 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29
c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.831552 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1f5cef8-3562-422d-99bb-28534ec3493a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bca4a5c072768706140bbe965a5b2fdcfaf1e4b06f2f9043f1a08efe2717fe0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afedae41697753018a8ebc39fb53c0a05f4cca642bcaa37f974d458702156cf8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:07:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 13:07:28.208218 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 13:07:28.210715 1 observer_polling.go:159] Starting file observer\\\\nI0223 13:07:28.326923 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 13:07:28.345176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 13:07:52.054537 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 13:07:52.054620 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eae57f25a2c5c6185b9efefc8b3729b06b64180892ecced033765aaebf9b5fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e5509ea096a7c151dfda3739dbbbe80101948468b414e6651996230399b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b066313052e450595bc08b10fa6316bfd3dd51d9c531f6c771a3a5ac138d785b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.842976 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"330d0749-a2d2-43c8-91c5-4defa068ac6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d34ef95174e2912058b7d4a786eb1dcbee4f58993800508fde8ff98e692869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efba4302294164dc5afd345375abda413072c51da085246a3a27c4e727a2c0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e437301c1ee7791c731beb8f4213ee36b7eb4e71d9141b1e618c740f554202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.856222 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.866074 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:00Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.967955 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.968039 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.968066 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:00 crc kubenswrapper[4851]: E0223 13:10:00.968176 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:00 crc kubenswrapper[4851]: E0223 13:10:00.968271 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:00 crc kubenswrapper[4851]: E0223 13:10:00.968536 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:00 crc kubenswrapper[4851]: I0223 13:10:00.978440 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 23 13:10:01 crc kubenswrapper[4851]: I0223 13:10:01.014195 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:18:01.848527863 +0000 UTC Feb 23 13:10:01 crc kubenswrapper[4851]: E0223 13:10:01.066283 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:10:01 crc kubenswrapper[4851]: I0223 13:10:01.967786 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:01 crc kubenswrapper[4851]: E0223 13:10:01.967989 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:02 crc kubenswrapper[4851]: I0223 13:10:02.014758 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 19:18:13.723795608 +0000 UTC Feb 23 13:10:02 crc kubenswrapper[4851]: I0223 13:10:02.968527 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:02 crc kubenswrapper[4851]: I0223 13:10:02.968625 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:02 crc kubenswrapper[4851]: I0223 13:10:02.968699 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:02 crc kubenswrapper[4851]: E0223 13:10:02.968811 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:02 crc kubenswrapper[4851]: E0223 13:10:02.969111 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:02 crc kubenswrapper[4851]: E0223 13:10:02.969011 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:03 crc kubenswrapper[4851]: I0223 13:10:03.015191 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:27:17.753823706 +0000 UTC Feb 23 13:10:03 crc kubenswrapper[4851]: I0223 13:10:03.968360 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:03 crc kubenswrapper[4851]: E0223 13:10:03.968477 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:04 crc kubenswrapper[4851]: I0223 13:10:04.016195 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:52:52.027974756 +0000 UTC Feb 23 13:10:04 crc kubenswrapper[4851]: I0223 13:10:04.968420 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:04 crc kubenswrapper[4851]: I0223 13:10:04.968459 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:04 crc kubenswrapper[4851]: E0223 13:10:04.968546 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:04 crc kubenswrapper[4851]: I0223 13:10:04.968431 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:04 crc kubenswrapper[4851]: E0223 13:10:04.968934 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:04 crc kubenswrapper[4851]: E0223 13:10:04.969012 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:04 crc kubenswrapper[4851]: I0223 13:10:04.969363 4851 scope.go:117] "RemoveContainer" containerID="1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.017079 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:28:27.07942887 +0000 UTC Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.633835 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/2.log" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.635824 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92"} Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.636450 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.646465 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5a296ee-a904-4283-8849-65abb16717b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a880cbfe10b56f61980a747eb95556122918942155643bad4a99f42d27a717f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a39796e37d7c46747743408ee115fc38d96faf5b
9f64d05a5b6e261756d05626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d9svq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-npswg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.654974 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gd5rs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jt4wg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc 
kubenswrapper[4851]: I0223 13:10:05.664407 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73bd5ad1-5240-4ff4-873a-5d68a723288c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://717784a4b23f7582925c2009757521d85edfb37b0f177714656211cc909eec2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://dd474662f340758a0ec49a33ed5d4b78b595a3eed41c679d6bfd6965d123e224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd474662f340758a0ec49a33ed5d4b78b595a3eed41c679d6bfd6965d123e224\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.686379 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01bd63f3-a07a-499e-ac40-34806ebd86d6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8501607c7f2c1ee23ba80bf298c0e4ac524c8da77912a785b5b84040b928a6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://209a99db9e5a361e011122b6a953bb6cc89c5e981b30484d638bc62a10e5c073\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de127ef4899fe63c1d9a53aba243f1119798ed3fca56b8cd65ed759970078569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77155c8f4a1d07b20cfea9692e082599775d5ca691d6cc3819af734d996a9696\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7636476296aa417d23884bf8cf2e6c585091aa06aeb3662ae144fad7c6ecea7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46c92112f26ac3828c28d1ead129a3792f3cdbde2482e99f14a8e289d6da7083\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f704bf320983d6b578b3b997529ca5b2037212d42888a3514a74dacbdc3fc61f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f6bd5b5129217fb3325fadb31c14d4a2695c2bc139d26acc4154307014fb87\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.698792 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed645ec9-2788-4c88-ac43-d030c18eb2a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"message\\\":\\\"W0223 13:08:17.222781 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0223 13:08:17.223151 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771852097 cert, and key in /tmp/serving-cert-1312358206/serving-signer.crt, /tmp/serving-cert-1312358206/serving-signer.key\\\\nI0223 13:08:17.478184 1 observer_polling.go:159] Starting file observer\\\\nW0223 13:08:17.493601 1 builder.go:272] unable to get owner reference (falling back to namespace): Unauthorized\\\\nI0223 13:08:17.493902 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 13:08:17.495176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1312358206/tls.crt::/tmp/serving-cert-1312358206/tls.key\\\\\\\"\\\\nF0223 13:08:17.852944 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:08:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.711004 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.721421 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28
eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.733180 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.744233 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.759449 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.771882 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23e3112452e76d2708be5f07b2c788533677d8137785411dba75d1a469195d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:58Z\\\",\\\"message\\\":\\\"2026-02-23T13:09:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_578bea46-36bf-4a1b-9081-5419fa07d0fd\\\\n2026-02-23T13:09:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_578bea46-36bf-4a1b-9081-5419fa07d0fd to /host/opt/cni/bin/\\\\n2026-02-23T13:09:13Z [verbose] multus-daemon started\\\\n2026-02-23T13:09:13Z [verbose] Readiness Indicator file check\\\\n2026-02-23T13:09:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.783811 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.800424 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:36Z\\\",\\\"message\\\":\\\" 6935 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893532 6935 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893692 6935 reflector.go:311] Stopping reflector 
*v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894068 6935 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.894083 6935 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894172 6935 factory.go:656] Stopping watch factory\\\\nI0223 13:09:36.914185 6935 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0223 13:09:36.914224 6935 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0223 13:09:36.914292 6935 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:36.914315 6935 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0223 13:09:36.914453 6935 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.812034 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1f5cef8-3562-422d-99bb-28534ec3493a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bca4a5c072768706140bbe965a5b2fdcfaf1e4b06f2f9043f1a08efe2717fe0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afedae41697753018a8ebc39fb53c0a05f4cca642bcaa37f974d458702156cf8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:07:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 13:07:28.208218 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 13:07:28.210715 1 observer_polling.go:159] Starting file observer\\\\nI0223 13:07:28.326923 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 13:07:28.345176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 13:07:52.054537 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 13:07:52.054620 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eae57f25a2c5c6185b9efefc8b3729b06b64180892ecced033765aaebf9b5fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e5509ea096a7c151dfda3739dbbbe80101948468b414e6651996230399b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b066313052e450595bc08b10fa6316bfd3dd51d9c531f6c771a3a5ac138d785b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.822656 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"330d0749-a2d2-43c8-91c5-4defa068ac6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d34ef95174e2912058b7d4a786eb1dcbee4f58993800508fde8ff98e692869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efba4302294164dc5afd345375abda413072c51da085246a3a27c4e727a2c0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e437301c1ee7791c731beb8f4213ee36b7eb4e71d9141b1e618c740f554202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.837699 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.848827 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.860709 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.870440 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.968085 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:05 crc kubenswrapper[4851]: E0223 13:10:05.968224 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.982796 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7cvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d14644c4-9d6f-4a06-bc4a-85795d4be4cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:10:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23e3112452e76d2708be5f07b2c788533677d8137785411dba75d1a469195d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:58Z\\\",\\\"message\\\":\\\"2026-02-23T13:09:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_578bea46-36bf-4a1b-9081-5419fa07d0fd\\\\n2026-02-23T13:09:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_578bea46-36bf-4a1b-9081-5419fa07d0fd to /host/opt/cni/bin/\\\\n2026-02-23T13:09:13Z [verbose] multus-daemon started\\\\n2026-02-23T13:09:13Z [verbose] Readiness Indicator file check\\\\n2026-02-23T13:09:58Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-shdj9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7cvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:05 crc kubenswrapper[4851]: I0223 13:10:05.993465 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cd8049b-7b79-49cb-9471-811e7651e400\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2adbf4b764f84b9fad1911a8fbf4de73ac11ba10905efeb1514f8e68bd3a6a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e433703dd0ce869c8a07590de141058945b28
eec91be7ff2086fbcd8ff6fedbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsj29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-bscl2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:05Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.006367 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.016344 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c58ec087f2d5fbf3a799f87026f332804cfe2a0766101ffdef06e478469e643b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T13:10:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.017297 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:22:53.496530736 +0000 UTC Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.031346 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8sz99" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd228f9-317e-43a7-a9f8-473d69d93204\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaf2329e1a68ce71dbf45e46b01c01ed8ab2afde42659e278e641128eff102b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97683e22330ffa96ffc4149d740404e2f5a0a9f0c70b72b4c2114c9651b8f82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ace504a4a1ed9ca27ddb29f156e221ede54785a3674a2a81d92a88b21415e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41963f87ca4a615e29b3489da673d3a58545ab6559aeeeaa8f0859a24fa52610\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:
13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d69f746599e9440cf3e937a172ee2cc91f8d99277cc6b9a7b1da1d5422f04aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e543
19f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b576c69ae9d78c90eb02c3d4387b1641166c535f4828d75e12b72c649a337011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c507f83d9d08f2c83062ab77c8d0d66430cc87a1683348440f75f75f3ed590b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlfxz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8sz99\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.043192 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.056214 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-snjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"584d5dd7-e0f3-4695-bf51-22e1b643db23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9058cf80dbd12b8cddd765b0cbfc347e1ad2e32473b89e379932728867782ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpxbk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-snjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: E0223 13:10:06.067096 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.076405 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:11Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T13:09:36Z\\\",\\\"message\\\":\\\" 6935 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893532 6935 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.893692 6935 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894068 6935 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 13:09:36.894083 6935 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 13:09:36.894172 6935 factory.go:656] Stopping watch factory\\\\nI0223 13:09:36.914185 6935 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0223 13:09:36.914224 6935 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0223 13:09:36.914292 6935 ovnkube.go:599] Stopped ovnkube\\\\nI0223 13:09:36.914315 6935 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0223 13:09:36.914453 6935 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:10:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:09:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:09:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzppz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:11Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n9df6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.089146 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1f5cef8-3562-422d-99bb-28534ec3493a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bca4a5c072768706140bbe965a5b2fdcfaf1e4b06f2f9043f1a08efe2717fe0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afedae41697753018a8ebc39fb53c0a05f4cca642bcaa37f974d458702156cf8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T13:07:52Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 13:07:28.208218 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 13:07:28.210715 1 observer_polling.go:159] Starting file observer\\\\nI0223 13:07:28.326923 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 13:07:28.345176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 13:07:52.054537 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 13:07:52.054620 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:07:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eae57f25a2c5c6185b9efefc8b3729b06b64180892ecced033765aaebf9b5fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0e5509ea096a7c151dfda3739dbbbe80101948468b414e6651996230399b3b9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b066313052e450595bc08b10fa6316bfd3dd51d9c531f6c771a3a5ac138d785b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.098715 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"330d0749-a2d2-43c8-91c5-4defa068ac6b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://73d34ef95174e2912058b7d4a786eb1dcbee4f58993800508fde8ff98e692869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://efba4302294164dc5afd345375abda413072c51da085246a3a27c4e727a2c0b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68e437301c1ee7791c731beb8f4213ee36b7eb4e71d9141b1e618c740f554202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3bf6219030a2440a5a570719485a520d6a5505653aeeda3ff9e37313c58136e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T13:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T13:07:27Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:07:26Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.110050 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7c30a93ee90616ea67b9412b5fa6f75d9d3340bad5f4973834a7f94b3225c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://330ad3e447ab410cf9c1fd2d212059c2cb994ed4d00bf2b122b58b3829456ee9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.120269 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:40Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T13:08:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08dd4c1ebaddf9c7c39e93ed4bdbb84163bb1822951aafe40698e75e9713188a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:08:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.129221 4851 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-chvdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e47bced-c5be-4d4f-9dee-0a992534dea2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T13:09:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7e6004a25753af58c3c009bd08a939f2da9b12992f17f56bc8e035526ad0e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T13:09:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6h64b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T13:09:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-chvdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T13:10:06Z is after 2025-08-24T17:21:41Z" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.163931 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podStartSLOduration=104.163907882 podStartE2EDuration="1m44.163907882s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:06.163426289 +0000 UTC m=+160.845130027" watchObservedRunningTime="2026-02-23 13:10:06.163907882 +0000 UTC m=+160.845611600" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.221063 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.221043505 podStartE2EDuration="6.221043505s" podCreationTimestamp="2026-02-23 13:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:06.189725068 +0000 UTC m=+160.871428756" watchObservedRunningTime="2026-02-23 
13:10:06.221043505 +0000 UTC m=+160.902747183" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.236397 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=67.236380484 podStartE2EDuration="1m7.236380484s" podCreationTimestamp="2026-02-23 13:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:06.221391704 +0000 UTC m=+160.903095392" watchObservedRunningTime="2026-02-23 13:10:06.236380484 +0000 UTC m=+160.918084162" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.640782 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/3.log" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.642070 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/2.log" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.645277 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" exitCode=1 Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.645381 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92"} Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.645463 4851 scope.go:117] "RemoveContainer" containerID="1769ee25de28550261304c99e5bcfadfa00c39a4082b1ed619ee3ec08fdd28fd" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.646157 4851 scope.go:117] "RemoveContainer" 
containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" Feb 23 13:10:06 crc kubenswrapper[4851]: E0223 13:10:06.646359 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.666265 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.666248309 podStartE2EDuration="1m20.666248309s" podCreationTimestamp="2026-02-23 13:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:06.237038432 +0000 UTC m=+160.918742110" watchObservedRunningTime="2026-02-23 13:10:06.666248309 +0000 UTC m=+161.347951987" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.678124 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-chvdk" podStartSLOduration=105.678069582 podStartE2EDuration="1m45.678069582s" podCreationTimestamp="2026-02-23 13:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:06.677917518 +0000 UTC m=+161.359621216" watchObservedRunningTime="2026-02-23 13:10:06.678069582 +0000 UTC m=+161.359773300" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.721511 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8sz99" podStartSLOduration=104.72149822 podStartE2EDuration="1m44.72149822s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:06.721432348 +0000 UTC m=+161.403136036" watchObservedRunningTime="2026-02-23 13:10:06.72149822 +0000 UTC m=+161.403201898" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.737150 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-t7cvl" podStartSLOduration=104.737133727 podStartE2EDuration="1m44.737133727s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:06.736721256 +0000 UTC m=+161.418424944" watchObservedRunningTime="2026-02-23 13:10:06.737133727 +0000 UTC m=+161.418837395" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.755848 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-bscl2" podStartSLOduration=104.755831219 podStartE2EDuration="1m44.755831219s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:06.754945354 +0000 UTC m=+161.436649052" watchObservedRunningTime="2026-02-23 13:10:06.755831219 +0000 UTC m=+161.437534897" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.771469 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=28.771449506 podStartE2EDuration="28.771449506s" podCreationTimestamp="2026-02-23 13:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:06.771415575 +0000 UTC m=+161.453119273" watchObservedRunningTime="2026-02-23 13:10:06.771449506 +0000 
UTC m=+161.453153184" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.795662 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=27.795643837 podStartE2EDuration="27.795643837s" podCreationTimestamp="2026-02-23 13:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:06.783139605 +0000 UTC m=+161.464843303" watchObservedRunningTime="2026-02-23 13:10:06.795643837 +0000 UTC m=+161.477347515" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.821649 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-snjvm" podStartSLOduration=105.821633168 podStartE2EDuration="1m45.821633168s" podCreationTimestamp="2026-02-23 13:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:06.820918988 +0000 UTC m=+161.502622666" watchObservedRunningTime="2026-02-23 13:10:06.821633168 +0000 UTC m=+161.503336846" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.968508 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.968508 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:06 crc kubenswrapper[4851]: I0223 13:10:06.968511 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:06 crc kubenswrapper[4851]: E0223 13:10:06.968632 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:06 crc kubenswrapper[4851]: E0223 13:10:06.968727 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:06 crc kubenswrapper[4851]: E0223 13:10:06.968780 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:07 crc kubenswrapper[4851]: I0223 13:10:07.017601 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:18:09.268606852 +0000 UTC Feb 23 13:10:07 crc kubenswrapper[4851]: I0223 13:10:07.651392 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/3.log" Feb 23 13:10:07 crc kubenswrapper[4851]: I0223 13:10:07.655512 4851 scope.go:117] "RemoveContainer" containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" Feb 23 13:10:07 crc kubenswrapper[4851]: E0223 13:10:07.655657 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" Feb 23 13:10:07 crc kubenswrapper[4851]: I0223 13:10:07.968122 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:07 crc kubenswrapper[4851]: E0223 13:10:07.968398 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.018544 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:21:50.821028722 +0000 UTC Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.091795 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.091838 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.091852 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.091869 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.091881 4851 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T13:10:08Z","lastTransitionTime":"2026-02-23T13:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.154829 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f"] Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.155473 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.158540 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.158738 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.159591 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.161143 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.245872 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e8de9980-abab-4248-a8b6-bb8b293922b8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.245924 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8de9980-abab-4248-a8b6-bb8b293922b8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.246008 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/e8de9980-abab-4248-a8b6-bb8b293922b8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.246038 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8de9980-abab-4248-a8b6-bb8b293922b8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.246061 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e8de9980-abab-4248-a8b6-bb8b293922b8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.347050 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e8de9980-abab-4248-a8b6-bb8b293922b8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.347109 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8de9980-abab-4248-a8b6-bb8b293922b8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.347221 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e8de9980-abab-4248-a8b6-bb8b293922b8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.347260 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8de9980-abab-4248-a8b6-bb8b293922b8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.347294 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e8de9980-abab-4248-a8b6-bb8b293922b8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.347426 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e8de9980-abab-4248-a8b6-bb8b293922b8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.347626 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/e8de9980-abab-4248-a8b6-bb8b293922b8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.348057 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e8de9980-abab-4248-a8b6-bb8b293922b8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.358083 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8de9980-abab-4248-a8b6-bb8b293922b8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.378554 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e8de9980-abab-4248-a8b6-bb8b293922b8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wk48f\" (UID: \"e8de9980-abab-4248-a8b6-bb8b293922b8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.469875 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" Feb 23 13:10:08 crc kubenswrapper[4851]: W0223 13:10:08.486493 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8de9980_abab_4248_a8b6_bb8b293922b8.slice/crio-526144f1d87ebfbc3622dffbbf481a4df4370bfbc684d83b08811c7510bbfdcb WatchSource:0}: Error finding container 526144f1d87ebfbc3622dffbbf481a4df4370bfbc684d83b08811c7510bbfdcb: Status 404 returned error can't find the container with id 526144f1d87ebfbc3622dffbbf481a4df4370bfbc684d83b08811c7510bbfdcb Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.660520 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" event={"ID":"e8de9980-abab-4248-a8b6-bb8b293922b8","Type":"ContainerStarted","Data":"9f53050a215b0e038b5f85443d134fc48fecb420451380dae91caf5e87dc2146"} Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.660583 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" event={"ID":"e8de9980-abab-4248-a8b6-bb8b293922b8","Type":"ContainerStarted","Data":"526144f1d87ebfbc3622dffbbf481a4df4370bfbc684d83b08811c7510bbfdcb"} Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.968386 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.968422 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:08 crc kubenswrapper[4851]: I0223 13:10:08.968386 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:08 crc kubenswrapper[4851]: E0223 13:10:08.968501 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:08 crc kubenswrapper[4851]: E0223 13:10:08.968629 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:08 crc kubenswrapper[4851]: E0223 13:10:08.968687 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:09 crc kubenswrapper[4851]: I0223 13:10:09.019496 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:46:39.357219223 +0000 UTC Feb 23 13:10:09 crc kubenswrapper[4851]: I0223 13:10:09.019558 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 23 13:10:09 crc kubenswrapper[4851]: I0223 13:10:09.028122 4851 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 13:10:09 crc kubenswrapper[4851]: I0223 13:10:09.968304 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:09 crc kubenswrapper[4851]: E0223 13:10:09.968545 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:10 crc kubenswrapper[4851]: I0223 13:10:10.967944 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:10 crc kubenswrapper[4851]: I0223 13:10:10.967994 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:10 crc kubenswrapper[4851]: E0223 13:10:10.968071 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:10 crc kubenswrapper[4851]: I0223 13:10:10.968030 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:10 crc kubenswrapper[4851]: E0223 13:10:10.968231 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:10 crc kubenswrapper[4851]: E0223 13:10:10.968360 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:11 crc kubenswrapper[4851]: E0223 13:10:11.069016 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:10:11 crc kubenswrapper[4851]: I0223 13:10:11.968234 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:11 crc kubenswrapper[4851]: E0223 13:10:11.968482 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:12 crc kubenswrapper[4851]: I0223 13:10:12.967724 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:12 crc kubenswrapper[4851]: I0223 13:10:12.967737 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:12 crc kubenswrapper[4851]: E0223 13:10:12.967891 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:12 crc kubenswrapper[4851]: E0223 13:10:12.967928 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:12 crc kubenswrapper[4851]: I0223 13:10:12.968231 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:12 crc kubenswrapper[4851]: E0223 13:10:12.968465 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:13 crc kubenswrapper[4851]: I0223 13:10:13.968306 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:13 crc kubenswrapper[4851]: E0223 13:10:13.968518 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:14 crc kubenswrapper[4851]: I0223 13:10:14.967687 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:14 crc kubenswrapper[4851]: I0223 13:10:14.967687 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:14 crc kubenswrapper[4851]: E0223 13:10:14.967828 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:14 crc kubenswrapper[4851]: E0223 13:10:14.967988 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:14 crc kubenswrapper[4851]: I0223 13:10:14.967713 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:14 crc kubenswrapper[4851]: E0223 13:10:14.968171 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:15 crc kubenswrapper[4851]: I0223 13:10:15.968773 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:15 crc kubenswrapper[4851]: E0223 13:10:15.970208 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:16 crc kubenswrapper[4851]: E0223 13:10:16.069832 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:10:16 crc kubenswrapper[4851]: I0223 13:10:16.968580 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:16 crc kubenswrapper[4851]: I0223 13:10:16.968611 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:16 crc kubenswrapper[4851]: I0223 13:10:16.968611 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:16 crc kubenswrapper[4851]: E0223 13:10:16.968736 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:16 crc kubenswrapper[4851]: E0223 13:10:16.968813 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:16 crc kubenswrapper[4851]: E0223 13:10:16.968913 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:17 crc kubenswrapper[4851]: I0223 13:10:17.968691 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:17 crc kubenswrapper[4851]: E0223 13:10:17.968862 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:18 crc kubenswrapper[4851]: I0223 13:10:18.968489 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:18 crc kubenswrapper[4851]: I0223 13:10:18.968499 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:18 crc kubenswrapper[4851]: E0223 13:10:18.968669 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:18 crc kubenswrapper[4851]: I0223 13:10:18.968673 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:18 crc kubenswrapper[4851]: E0223 13:10:18.969067 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:18 crc kubenswrapper[4851]: E0223 13:10:18.969198 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:19 crc kubenswrapper[4851]: I0223 13:10:19.968462 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:19 crc kubenswrapper[4851]: E0223 13:10:19.968809 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:20 crc kubenswrapper[4851]: I0223 13:10:20.968449 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:20 crc kubenswrapper[4851]: I0223 13:10:20.968514 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:20 crc kubenswrapper[4851]: E0223 13:10:20.968666 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:20 crc kubenswrapper[4851]: I0223 13:10:20.968752 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:20 crc kubenswrapper[4851]: E0223 13:10:20.969285 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:20 crc kubenswrapper[4851]: E0223 13:10:20.969120 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:20 crc kubenswrapper[4851]: I0223 13:10:20.970519 4851 scope.go:117] "RemoveContainer" containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" Feb 23 13:10:20 crc kubenswrapper[4851]: E0223 13:10:20.970781 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" Feb 23 13:10:21 crc kubenswrapper[4851]: E0223 13:10:21.071602 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:10:21 crc kubenswrapper[4851]: I0223 13:10:21.968609 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:21 crc kubenswrapper[4851]: E0223 13:10:21.968741 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:22 crc kubenswrapper[4851]: I0223 13:10:22.968323 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:22 crc kubenswrapper[4851]: I0223 13:10:22.968453 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:22 crc kubenswrapper[4851]: I0223 13:10:22.968460 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:22 crc kubenswrapper[4851]: E0223 13:10:22.968629 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:22 crc kubenswrapper[4851]: E0223 13:10:22.968839 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:22 crc kubenswrapper[4851]: E0223 13:10:22.968981 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:23 crc kubenswrapper[4851]: I0223 13:10:23.968028 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:23 crc kubenswrapper[4851]: E0223 13:10:23.968234 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:24 crc kubenswrapper[4851]: I0223 13:10:24.967954 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:24 crc kubenswrapper[4851]: I0223 13:10:24.968045 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:24 crc kubenswrapper[4851]: E0223 13:10:24.968122 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:24 crc kubenswrapper[4851]: I0223 13:10:24.968070 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:24 crc kubenswrapper[4851]: E0223 13:10:24.968279 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:24 crc kubenswrapper[4851]: E0223 13:10:24.968396 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:25 crc kubenswrapper[4851]: I0223 13:10:25.968288 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:25 crc kubenswrapper[4851]: E0223 13:10:25.969905 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:26 crc kubenswrapper[4851]: E0223 13:10:26.074899 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 23 13:10:26 crc kubenswrapper[4851]: I0223 13:10:26.968206 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:26 crc kubenswrapper[4851]: I0223 13:10:26.968289 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:26 crc kubenswrapper[4851]: I0223 13:10:26.968257 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:26 crc kubenswrapper[4851]: E0223 13:10:26.968516 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:26 crc kubenswrapper[4851]: E0223 13:10:26.968556 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:26 crc kubenswrapper[4851]: E0223 13:10:26.968696 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:27 crc kubenswrapper[4851]: I0223 13:10:27.968085 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:27 crc kubenswrapper[4851]: E0223 13:10:27.968243 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:28 crc kubenswrapper[4851]: I0223 13:10:28.451679 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:28 crc kubenswrapper[4851]: E0223 13:10:28.451834 4851 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:10:28 crc kubenswrapper[4851]: E0223 13:10:28.451896 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs podName:b88d393f-3f9d-4c95-b41b-10e998d5ca0f nodeName:}" failed. No retries permitted until 2026-02-23 13:11:32.451879862 +0000 UTC m=+247.133583550 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs") pod "network-metrics-daemon-jt4wg" (UID: "b88d393f-3f9d-4c95-b41b-10e998d5ca0f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 13:10:28 crc kubenswrapper[4851]: I0223 13:10:28.968380 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:28 crc kubenswrapper[4851]: I0223 13:10:28.968438 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:28 crc kubenswrapper[4851]: E0223 13:10:28.968490 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:28 crc kubenswrapper[4851]: I0223 13:10:28.968447 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:28 crc kubenswrapper[4851]: E0223 13:10:28.968614 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:28 crc kubenswrapper[4851]: E0223 13:10:28.968708 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:29 crc kubenswrapper[4851]: I0223 13:10:29.968720 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:29 crc kubenswrapper[4851]: E0223 13:10:29.968921 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:30 crc kubenswrapper[4851]: I0223 13:10:30.968656 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:30 crc kubenswrapper[4851]: I0223 13:10:30.968773 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:30 crc kubenswrapper[4851]: E0223 13:10:30.968791 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:30 crc kubenswrapper[4851]: I0223 13:10:30.968680 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:30 crc kubenswrapper[4851]: E0223 13:10:30.968955 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:30 crc kubenswrapper[4851]: E0223 13:10:30.969068 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:31 crc kubenswrapper[4851]: E0223 13:10:31.075712 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:10:31 crc kubenswrapper[4851]: I0223 13:10:31.969138 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:31 crc kubenswrapper[4851]: I0223 13:10:31.969195 4851 scope.go:117] "RemoveContainer" containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" Feb 23 13:10:31 crc kubenswrapper[4851]: E0223 13:10:31.969356 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:31 crc kubenswrapper[4851]: E0223 13:10:31.969372 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n9df6_openshift-ovn-kubernetes(4c1929e0-6878-4572-b6d1-3a6dd8e2c291)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" Feb 23 13:10:32 crc kubenswrapper[4851]: I0223 13:10:32.968002 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:32 crc kubenswrapper[4851]: I0223 13:10:32.967999 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:32 crc kubenswrapper[4851]: I0223 13:10:32.968067 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:32 crc kubenswrapper[4851]: E0223 13:10:32.968388 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:32 crc kubenswrapper[4851]: E0223 13:10:32.968519 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:32 crc kubenswrapper[4851]: E0223 13:10:32.968732 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:33 crc kubenswrapper[4851]: I0223 13:10:33.968252 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:33 crc kubenswrapper[4851]: E0223 13:10:33.968488 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:34 crc kubenswrapper[4851]: I0223 13:10:34.967926 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:34 crc kubenswrapper[4851]: I0223 13:10:34.967998 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:34 crc kubenswrapper[4851]: I0223 13:10:34.968040 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:34 crc kubenswrapper[4851]: E0223 13:10:34.968097 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:34 crc kubenswrapper[4851]: E0223 13:10:34.968224 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:34 crc kubenswrapper[4851]: E0223 13:10:34.968404 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:35 crc kubenswrapper[4851]: I0223 13:10:35.968121 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:35 crc kubenswrapper[4851]: E0223 13:10:35.969110 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:36 crc kubenswrapper[4851]: E0223 13:10:36.076879 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:10:36 crc kubenswrapper[4851]: I0223 13:10:36.968125 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:36 crc kubenswrapper[4851]: I0223 13:10:36.968230 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:36 crc kubenswrapper[4851]: E0223 13:10:36.968713 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:36 crc kubenswrapper[4851]: E0223 13:10:36.968607 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:36 crc kubenswrapper[4851]: I0223 13:10:36.968238 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:36 crc kubenswrapper[4851]: E0223 13:10:36.968794 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:37 crc kubenswrapper[4851]: I0223 13:10:37.968670 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:37 crc kubenswrapper[4851]: E0223 13:10:37.968836 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:38 crc kubenswrapper[4851]: I0223 13:10:38.967888 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:38 crc kubenswrapper[4851]: I0223 13:10:38.967885 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:38 crc kubenswrapper[4851]: I0223 13:10:38.967978 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:38 crc kubenswrapper[4851]: E0223 13:10:38.968215 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:38 crc kubenswrapper[4851]: E0223 13:10:38.968310 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:38 crc kubenswrapper[4851]: E0223 13:10:38.968405 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:39 crc kubenswrapper[4851]: I0223 13:10:39.969374 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:39 crc kubenswrapper[4851]: E0223 13:10:39.969549 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:40 crc kubenswrapper[4851]: I0223 13:10:40.968426 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:40 crc kubenswrapper[4851]: I0223 13:10:40.968474 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:40 crc kubenswrapper[4851]: E0223 13:10:40.968551 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:40 crc kubenswrapper[4851]: I0223 13:10:40.968455 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:40 crc kubenswrapper[4851]: E0223 13:10:40.968653 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:40 crc kubenswrapper[4851]: E0223 13:10:40.968715 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:41 crc kubenswrapper[4851]: E0223 13:10:41.078544 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:10:41 crc kubenswrapper[4851]: I0223 13:10:41.968292 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:41 crc kubenswrapper[4851]: E0223 13:10:41.968446 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:42 crc kubenswrapper[4851]: I0223 13:10:42.968158 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:42 crc kubenswrapper[4851]: I0223 13:10:42.968241 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:42 crc kubenswrapper[4851]: I0223 13:10:42.968189 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:42 crc kubenswrapper[4851]: E0223 13:10:42.968347 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:42 crc kubenswrapper[4851]: E0223 13:10:42.968387 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:42 crc kubenswrapper[4851]: E0223 13:10:42.968433 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:43 crc kubenswrapper[4851]: I0223 13:10:43.968152 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:43 crc kubenswrapper[4851]: E0223 13:10:43.968386 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:44 crc kubenswrapper[4851]: I0223 13:10:44.968243 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:44 crc kubenswrapper[4851]: E0223 13:10:44.968376 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:44 crc kubenswrapper[4851]: I0223 13:10:44.968546 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:44 crc kubenswrapper[4851]: E0223 13:10:44.968635 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:44 crc kubenswrapper[4851]: I0223 13:10:44.968541 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:44 crc kubenswrapper[4851]: E0223 13:10:44.968804 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:45 crc kubenswrapper[4851]: I0223 13:10:45.777215 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7cvl_d14644c4-9d6f-4a06-bc4a-85795d4be4cd/kube-multus/1.log" Feb 23 13:10:45 crc kubenswrapper[4851]: I0223 13:10:45.777983 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7cvl_d14644c4-9d6f-4a06-bc4a-85795d4be4cd/kube-multus/0.log" Feb 23 13:10:45 crc kubenswrapper[4851]: I0223 13:10:45.778035 4851 generic.go:334] "Generic (PLEG): container finished" podID="d14644c4-9d6f-4a06-bc4a-85795d4be4cd" containerID="f23e3112452e76d2708be5f07b2c788533677d8137785411dba75d1a469195d3" exitCode=1 Feb 23 13:10:45 crc kubenswrapper[4851]: I0223 13:10:45.778063 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7cvl" event={"ID":"d14644c4-9d6f-4a06-bc4a-85795d4be4cd","Type":"ContainerDied","Data":"f23e3112452e76d2708be5f07b2c788533677d8137785411dba75d1a469195d3"} Feb 23 13:10:45 crc kubenswrapper[4851]: I0223 13:10:45.778092 4851 scope.go:117] "RemoveContainer" containerID="61d26b66afee6ed854b0cbd79bfc86f0ddd8b2cd879c7a2fd76173a44ab98cbb" Feb 23 13:10:45 crc 
kubenswrapper[4851]: I0223 13:10:45.778496 4851 scope.go:117] "RemoveContainer" containerID="f23e3112452e76d2708be5f07b2c788533677d8137785411dba75d1a469195d3" Feb 23 13:10:45 crc kubenswrapper[4851]: E0223 13:10:45.778731 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-t7cvl_openshift-multus(d14644c4-9d6f-4a06-bc4a-85795d4be4cd)\"" pod="openshift-multus/multus-t7cvl" podUID="d14644c4-9d6f-4a06-bc4a-85795d4be4cd" Feb 23 13:10:45 crc kubenswrapper[4851]: I0223 13:10:45.796583 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wk48f" podStartSLOduration=143.796567447 podStartE2EDuration="2m23.796567447s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:08.679194783 +0000 UTC m=+163.360898481" watchObservedRunningTime="2026-02-23 13:10:45.796567447 +0000 UTC m=+200.478271125" Feb 23 13:10:45 crc kubenswrapper[4851]: I0223 13:10:45.968267 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:45 crc kubenswrapper[4851]: E0223 13:10:45.969312 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:45 crc kubenswrapper[4851]: I0223 13:10:45.969545 4851 scope.go:117] "RemoveContainer" containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.079861 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.681049 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jt4wg"] Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.782675 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7cvl_d14644c4-9d6f-4a06-bc4a-85795d4be4cd/kube-multus/1.log" Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.784648 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/3.log" Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.786826 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerStarted","Data":"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2"} Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.786852 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.786945 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.787355 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.812066 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podStartSLOduration=144.812045542 podStartE2EDuration="2m24.812045542s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:10:46.809565694 +0000 UTC m=+201.491269392" watchObservedRunningTime="2026-02-23 13:10:46.812045542 +0000 UTC m=+201.493749240" Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.883644 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.883745 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.883850 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.883889 4851 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.883905 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:12:48.883873686 +0000 UTC m=+323.565577364 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.883965 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-23 13:12:48.883954768 +0000 UTC m=+323.565658566 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.883985 4851 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.884029 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:12:48.88401591 +0000 UTC m=+323.565719588 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.967988 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.968033 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.968086 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.968104 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.968160 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.968219 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.984715 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:46 crc kubenswrapper[4851]: I0223 13:10:46.984754 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.984889 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.984904 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.984915 4851 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.984956 4851 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 13:12:48.984944269 +0000 UTC m=+323.666647947 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.984974 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.985021 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.985036 4851 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:10:46 crc kubenswrapper[4851]: E0223 13:10:46.985101 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 13:12:48.985079853 +0000 UTC m=+323.666783631 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 13:10:48 crc kubenswrapper[4851]: I0223 13:10:48.967806 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:48 crc kubenswrapper[4851]: I0223 13:10:48.967806 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:48 crc kubenswrapper[4851]: E0223 13:10:48.968053 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:48 crc kubenswrapper[4851]: E0223 13:10:48.968086 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:48 crc kubenswrapper[4851]: I0223 13:10:48.967838 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:48 crc kubenswrapper[4851]: E0223 13:10:48.968206 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:48 crc kubenswrapper[4851]: I0223 13:10:48.967806 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:48 crc kubenswrapper[4851]: E0223 13:10:48.968351 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:50 crc kubenswrapper[4851]: I0223 13:10:50.968792 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:50 crc kubenswrapper[4851]: I0223 13:10:50.968821 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:50 crc kubenswrapper[4851]: E0223 13:10:50.969618 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:50 crc kubenswrapper[4851]: I0223 13:10:50.968901 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:50 crc kubenswrapper[4851]: I0223 13:10:50.968821 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:50 crc kubenswrapper[4851]: E0223 13:10:50.969828 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:50 crc kubenswrapper[4851]: E0223 13:10:50.969911 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:50 crc kubenswrapper[4851]: E0223 13:10:50.970045 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:51 crc kubenswrapper[4851]: E0223 13:10:51.082723 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:10:52 crc kubenswrapper[4851]: I0223 13:10:52.968185 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:52 crc kubenswrapper[4851]: I0223 13:10:52.968231 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:52 crc kubenswrapper[4851]: I0223 13:10:52.968244 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:52 crc kubenswrapper[4851]: I0223 13:10:52.968437 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:52 crc kubenswrapper[4851]: E0223 13:10:52.968598 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:52 crc kubenswrapper[4851]: E0223 13:10:52.968891 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:52 crc kubenswrapper[4851]: E0223 13:10:52.969003 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:52 crc kubenswrapper[4851]: E0223 13:10:52.969150 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:54 crc kubenswrapper[4851]: I0223 13:10:54.967786 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:54 crc kubenswrapper[4851]: I0223 13:10:54.967846 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:54 crc kubenswrapper[4851]: I0223 13:10:54.967855 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:54 crc kubenswrapper[4851]: I0223 13:10:54.967888 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:54 crc kubenswrapper[4851]: E0223 13:10:54.968017 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:54 crc kubenswrapper[4851]: E0223 13:10:54.968164 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:54 crc kubenswrapper[4851]: E0223 13:10:54.968238 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:54 crc kubenswrapper[4851]: E0223 13:10:54.968459 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:56 crc kubenswrapper[4851]: E0223 13:10:56.083385 4851 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 13:10:56 crc kubenswrapper[4851]: I0223 13:10:56.968822 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:56 crc kubenswrapper[4851]: I0223 13:10:56.968895 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:56 crc kubenswrapper[4851]: I0223 13:10:56.968953 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:56 crc kubenswrapper[4851]: I0223 13:10:56.969615 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:56 crc kubenswrapper[4851]: E0223 13:10:56.969741 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:10:56 crc kubenswrapper[4851]: E0223 13:10:56.969923 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:56 crc kubenswrapper[4851]: E0223 13:10:56.970026 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:56 crc kubenswrapper[4851]: E0223 13:10:56.970189 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:56 crc kubenswrapper[4851]: I0223 13:10:56.970451 4851 scope.go:117] "RemoveContainer" containerID="f23e3112452e76d2708be5f07b2c788533677d8137785411dba75d1a469195d3" Feb 23 13:10:57 crc kubenswrapper[4851]: I0223 13:10:57.826809 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7cvl_d14644c4-9d6f-4a06-bc4a-85795d4be4cd/kube-multus/1.log" Feb 23 13:10:57 crc kubenswrapper[4851]: I0223 13:10:57.826872 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7cvl" event={"ID":"d14644c4-9d6f-4a06-bc4a-85795d4be4cd","Type":"ContainerStarted","Data":"0026076e95f0c7e84d940cc73c6f26c87c1b130819fbb330b48e9a5d5b82c6a5"} Feb 23 13:10:58 crc kubenswrapper[4851]: I0223 13:10:58.967877 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:10:58 crc kubenswrapper[4851]: I0223 13:10:58.967889 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:10:58 crc kubenswrapper[4851]: E0223 13:10:58.968356 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:10:58 crc kubenswrapper[4851]: I0223 13:10:58.967929 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:10:58 crc kubenswrapper[4851]: I0223 13:10:58.967889 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:10:58 crc kubenswrapper[4851]: E0223 13:10:58.968644 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:10:58 crc kubenswrapper[4851]: E0223 13:10:58.968894 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:10:58 crc kubenswrapper[4851]: E0223 13:10:58.969106 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:11:00 crc kubenswrapper[4851]: I0223 13:11:00.968702 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:11:00 crc kubenswrapper[4851]: I0223 13:11:00.968765 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:11:00 crc kubenswrapper[4851]: I0223 13:11:00.968776 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:11:00 crc kubenswrapper[4851]: I0223 13:11:00.968738 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:11:00 crc kubenswrapper[4851]: E0223 13:11:00.968890 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jt4wg" podUID="b88d393f-3f9d-4c95-b41b-10e998d5ca0f" Feb 23 13:11:00 crc kubenswrapper[4851]: E0223 13:11:00.968967 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:11:00 crc kubenswrapper[4851]: E0223 13:11:00.968997 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:11:00 crc kubenswrapper[4851]: E0223 13:11:00.969041 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:11:02 crc kubenswrapper[4851]: I0223 13:11:02.968588 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:11:02 crc kubenswrapper[4851]: I0223 13:11:02.968638 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:11:02 crc kubenswrapper[4851]: I0223 13:11:02.968600 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:11:02 crc kubenswrapper[4851]: I0223 13:11:02.968599 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:11:02 crc kubenswrapper[4851]: I0223 13:11:02.971735 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 13:11:02 crc kubenswrapper[4851]: I0223 13:11:02.972379 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 13:11:02 crc kubenswrapper[4851]: I0223 13:11:02.972410 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 13:11:02 crc kubenswrapper[4851]: I0223 13:11:02.972871 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 13:11:02 crc kubenswrapper[4851]: I0223 13:11:02.972900 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 13:11:02 crc kubenswrapper[4851]: I0223 13:11:02.973537 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.862074 4851 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.933530 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f2568"] Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.934563 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.934679 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-slxzr"] Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.935284 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-slxzr" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.935731 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dw6fk"] Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.936388 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.939869 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt"] Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.940652 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf"] Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.941172 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.941817 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.942482 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh"] Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.943845 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.953932 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.956555 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.957037 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.957321 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.962247 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.962468 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.962501 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.967203 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 23 
13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.968475 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.978917 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.979905 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 13:11:09 crc kubenswrapper[4851]: I0223 13:11:09.980160 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.005985 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.006401 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8r5vd"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.006590 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.006605 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.006714 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.006791 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.006925 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.007047 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.007122 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.007280 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.007531 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.007551 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.007608 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.007645 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.007674 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 
13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.007728 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.007797 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.007811 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.008185 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.009298 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.009781 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.010246 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.020793 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z9hs4"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.021129 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.021210 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.021217 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.021838 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fcq25"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.022510 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.022605 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x8scz"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.022990 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.023450 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.024697 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.024859 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.025691 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bgglr"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.025792 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.027042 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.040371 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.041042 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.041385 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tbl98"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.041660 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.042630 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.042805 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zzqtr"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.042718 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.043184 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.048254 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-966l7"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.049198 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.049213 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.049343 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gllrl"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.049441 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.049604 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.050026 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.050428 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.050562 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.050656 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.058604 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49cf75f7-0c60-4281-b114-d43db3ea4e3c-machine-approver-tls\") pod \"machine-approver-56656f9798-tx9dh\" (UID: \"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.058656 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49cf75f7-0c60-4281-b114-d43db3ea4e3c-config\") pod \"machine-approver-56656f9798-tx9dh\" (UID: \"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.058685 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64wl6\" (UniqueName: \"kubernetes.io/projected/49cf75f7-0c60-4281-b114-d43db3ea4e3c-kube-api-access-64wl6\") pod \"machine-approver-56656f9798-tx9dh\" (UID: \"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:10 crc kubenswrapper[4851]: 
I0223 13:11:10.058745 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49cf75f7-0c60-4281-b114-d43db3ea4e3c-auth-proxy-config\") pod \"machine-approver-56656f9798-tx9dh\" (UID: \"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.060481 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.060643 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.060983 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.061261 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.061567 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.061817 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.062120 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.062248 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.062460 4851 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.062502 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.062626 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.062679 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.062778 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.062889 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.062933 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.063038 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.063066 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.063197 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.063270 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.063305 4851 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.063522 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.063916 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.064094 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.064209 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.064294 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.064516 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.064617 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.064700 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.064782 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.064862 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 
13:11:10.064968 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.065054 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.065140 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.065386 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.068785 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-k2qrn"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.085877 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.086738 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.086864 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.087509 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.088794 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.089418 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 23 
13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.089685 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.090197 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.091164 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.091416 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.093525 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.093545 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.094082 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.094535 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.096456 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.096809 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.090203 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.098018 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.111916 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.112653 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.112921 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.113035 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.113143 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.113234 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.113652 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.113818 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.114040 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.114177 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.116364 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.117148 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.117665 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.118110 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.118233 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.119691 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.119960 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.120274 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.123684 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.126587 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.127008 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.127143 4851 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.128852 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.140955 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.141136 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.142533 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-slxzr"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.142570 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f2568"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.143571 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dw6fk"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.143590 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.143152 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.143402 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.144585 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wwn4t"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.144907 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.146025 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.146059 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.146286 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.146357 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.146562 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.156541 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.158603 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.159321 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64wl6\" (UniqueName: \"kubernetes.io/projected/49cf75f7-0c60-4281-b114-d43db3ea4e3c-kube-api-access-64wl6\") pod \"machine-approver-56656f9798-tx9dh\" (UID: \"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.159390 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49cf75f7-0c60-4281-b114-d43db3ea4e3c-auth-proxy-config\") pod \"machine-approver-56656f9798-tx9dh\" (UID: \"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.159429 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49cf75f7-0c60-4281-b114-d43db3ea4e3c-machine-approver-tls\") pod \"machine-approver-56656f9798-tx9dh\" (UID: 
\"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.159453 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49cf75f7-0c60-4281-b114-d43db3ea4e3c-config\") pod \"machine-approver-56656f9798-tx9dh\" (UID: \"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.160003 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49cf75f7-0c60-4281-b114-d43db3ea4e3c-config\") pod \"machine-approver-56656f9798-tx9dh\" (UID: \"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.160560 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49cf75f7-0c60-4281-b114-d43db3ea4e3c-auth-proxy-config\") pod \"machine-approver-56656f9798-tx9dh\" (UID: \"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.164489 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2mgdx"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.176553 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.177178 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49cf75f7-0c60-4281-b114-d43db3ea4e3c-machine-approver-tls\") pod \"machine-approver-56656f9798-tx9dh\" (UID: \"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.178013 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.178397 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.178768 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.178939 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.179121 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.181245 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mgv55"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.181368 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.182215 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.184458 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.186439 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.187602 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z8hww"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.190122 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.193689 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4fg88"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.194300 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.196028 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.196807 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.200063 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.202074 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8r5vd"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.205942 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lzbwt"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.206942 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.207494 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z9hs4"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.209460 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.210884 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.212177 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.213546 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bgglr"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.214816 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 
13:11:10.216206 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gllrl"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.217066 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.217319 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xnbsq"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.218802 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.219957 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tbl98"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.221101 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.222321 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.223298 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z8hww"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.224315 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.225587 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mgv55"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.226764 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.227867 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.229034 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zzqtr"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.230209 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.231403 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.232790 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.233851 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.234935 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fcq25"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.236419 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lzbwt"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.236712 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.237245 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx"] 
Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.238661 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.239641 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.240865 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4fg88"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.241918 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wwn4t"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.242958 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-966l7"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.243952 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.244937 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x8scz"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.245905 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.246982 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2mgdx"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.247908 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xnbsq"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.248946 4851 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-canary/ingress-canary-cxc28"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.249713 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cxc28" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.250535 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-zwhdl"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.252396 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.253029 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cxc28"] Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.256985 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.277079 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.297098 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.318772 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.339052 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.358090 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 
13:11:10.378080 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.408760 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.417294 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.437486 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.472023 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.517155 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.538390 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.557956 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.561963 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jqd8\" (UniqueName: \"kubernetes.io/projected/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-kube-api-access-4jqd8\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562004 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skktx\" (UniqueName: \"kubernetes.io/projected/a095f17c-1ff0-450a-93b7-1518f99771d9-kube-api-access-skktx\") pod \"cluster-samples-operator-665b6dd947-zs7nt\" (UID: \"a095f17c-1ff0-450a-93b7-1518f99771d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562036 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06807502-b6d9-4803-93bf-d8ac8e721ef3-audit-dir\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562064 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0d549d-439b-4011-8ea1-47e808a3b715-serving-cert\") pod \"openshift-config-operator-7777fb866f-f2568\" (UID: \"1b0d549d-439b-4011-8ea1-47e808a3b715\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562090 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-etcd-service-ca\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562116 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-etcd-client\") pod \"etcd-operator-b45778765-8r5vd\" (UID: 
\"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562140 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/598d1af4-7f4c-4815-8b0c-bd364fcc191d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562161 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-service-ca\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562186 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-oauth-serving-cert\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562429 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-images\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562525 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsfpj\" (UniqueName: 
\"kubernetes.io/projected/06807502-b6d9-4803-93bf-d8ac8e721ef3-kube-api-access-qsfpj\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562553 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-serving-cert\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562582 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsf6w\" (UniqueName: \"kubernetes.io/projected/4c4b7002-d97c-47bf-8de7-1361bcedc079-kube-api-access-lsf6w\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562614 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562655 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06807502-b6d9-4803-93bf-d8ac8e721ef3-serving-cert\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc 
kubenswrapper[4851]: I0223 13:11:10.562732 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562765 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5q6p\" (UniqueName: \"kubernetes.io/projected/1b0d549d-439b-4011-8ea1-47e808a3b715-kube-api-access-s5q6p\") pod \"openshift-config-operator-7777fb866f-f2568\" (UID: \"1b0d549d-439b-4011-8ea1-47e808a3b715\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562785 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8dmb\" (UniqueName: \"kubernetes.io/projected/a6fe30bd-a140-4309-9156-52d361049059-kube-api-access-t8dmb\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562901 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-trusted-ca\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562953 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-oauth-config\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.562988 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lwgf\" (UID: \"d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563011 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-config\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563041 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06807502-b6d9-4803-93bf-d8ac8e721ef3-audit-policies\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563078 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lwgf\" (UID: \"d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" Feb 
23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563121 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563155 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-bound-sa-token\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563177 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwsnq\" (UniqueName: \"kubernetes.io/projected/426e4581-f3d0-49ad-acf5-8466b46a993c-kube-api-access-kwsnq\") pod \"openshift-apiserver-operator-796bbdcf4f-cml5h\" (UID: \"426e4581-f3d0-49ad-acf5-8466b46a993c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563207 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpdw7\" (UniqueName: \"kubernetes.io/projected/d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b-kube-api-access-dpdw7\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lwgf\" (UID: \"d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563239 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563257 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06807502-b6d9-4803-93bf-d8ac8e721ef3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563318 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-trusted-ca-bundle\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563398 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bfjx\" (UniqueName: \"kubernetes.io/projected/58e43c54-4e65-4ca6-9a52-f79c58a072d4-kube-api-access-4bfjx\") pod \"downloads-7954f5f757-slxzr\" (UID: \"58e43c54-4e65-4ca6-9a52-f79c58a072d4\") " pod="openshift-console/downloads-7954f5f757-slxzr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563455 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-client-ca\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563512 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06807502-b6d9-4803-93bf-d8ac8e721ef3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563595 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-etcd-ca\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563624 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-config\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563645 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06807502-b6d9-4803-93bf-d8ac8e721ef3-encryption-config\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563673 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563695 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/426e4581-f3d0-49ad-acf5-8466b46a993c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cml5h\" (UID: \"426e4581-f3d0-49ad-acf5-8466b46a993c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563715 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-config\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563734 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-829hz\" (UniqueName: \"kubernetes.io/projected/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-kube-api-access-829hz\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563770 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06807502-b6d9-4803-93bf-d8ac8e721ef3-etcd-client\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 
13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563791 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-certificates\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563808 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmdgz\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-kube-api-access-cmdgz\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563830 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1b0d549d-439b-4011-8ea1-47e808a3b715-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f2568\" (UID: \"1b0d549d-439b-4011-8ea1-47e808a3b715\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563851 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563935 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/426e4581-f3d0-49ad-acf5-8466b46a993c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cml5h\" (UID: \"426e4581-f3d0-49ad-acf5-8466b46a993c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.563990 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4b7002-d97c-47bf-8de7-1361bcedc079-serving-cert\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.564042 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txckl\" (UniqueName: \"kubernetes.io/projected/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-kube-api-access-txckl\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.564061 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a095f17c-1ff0-450a-93b7-1518f99771d9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zs7nt\" (UID: \"a095f17c-1ff0-450a-93b7-1518f99771d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.564081 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-tls\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: E0223 13:11:10.564108 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.064090902 +0000 UTC m=+225.745794590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.564174 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-serving-cert\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.564242 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-console-config\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.564289 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/598d1af4-7f4c-4815-8b0c-bd364fcc191d-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.578035 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.597210 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.617981 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.639535 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.658500 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.665235 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:10 crc kubenswrapper[4851]: E0223 13:11:10.665599 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.165545146 +0000 UTC m=+225.847248864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.665700 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skktx\" (UniqueName: \"kubernetes.io/projected/a095f17c-1ff0-450a-93b7-1518f99771d9-kube-api-access-skktx\") pod \"cluster-samples-operator-665b6dd947-zs7nt\" (UID: \"a095f17c-1ff0-450a-93b7-1518f99771d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.665770 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06807502-b6d9-4803-93bf-d8ac8e721ef3-audit-dir\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.665815 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1da16125-ec58-487d-ae4e-16125c21bd0e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7g2rj\" (UID: \"1da16125-ec58-487d-ae4e-16125c21bd0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.665854 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmdz8\" (UniqueName: 
\"kubernetes.io/projected/ce808fab-8894-45af-86e6-5193f1de3201-kube-api-access-xmdz8\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.665917 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bghf2\" (UniqueName: \"kubernetes.io/projected/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-kube-api-access-bghf2\") pod \"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.665964 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0d549d-439b-4011-8ea1-47e808a3b715-serving-cert\") pod \"openshift-config-operator-7777fb866f-f2568\" (UID: \"1b0d549d-439b-4011-8ea1-47e808a3b715\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.665976 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/06807502-b6d9-4803-93bf-d8ac8e721ef3-audit-dir\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666039 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-etcd-service-ca\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666160 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-etcd-client\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666252 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/328f5d3c-6337-407d-a812-034d9d26069c-serving-cert\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666282 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/14bfc49a-35b3-4046-8ea4-ad199864f42a-tmpfs\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: \"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666319 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666371 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b5eed4cb-5efc-4449-9169-1375cb5e0dff-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gd96x\" (UID: 
\"b5eed4cb-5efc-4449-9169-1375cb5e0dff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666414 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-image-import-ca\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666449 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-certs\") pod \"machine-config-server-zwhdl\" (UID: \"e659c1af-dde1-4c75-9e48-b1c9a2c7f598\") " pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666507 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-service-ca\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666549 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-oauth-serving-cert\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666875 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-config\") pod 
\"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666950 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-etcd-service-ca\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.666948 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4401f548-7ea9-4246-9ece-dc05c1738ffe-signing-cabundle\") pod \"service-ca-9c57cc56f-z8hww\" (UID: \"4401f548-7ea9-4246-9ece-dc05c1738ffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667017 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-images\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667053 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1da16125-ec58-487d-ae4e-16125c21bd0e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7g2rj\" (UID: \"1da16125-ec58-487d-ae4e-16125c21bd0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667082 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-2dvvw\" (UniqueName: \"kubernetes.io/projected/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-kube-api-access-2dvvw\") pod \"dns-default-lzbwt\" (UID: \"3c34ba22-f973-4f34-b83f-3dcfdb3265d3\") " pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667106 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-images\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: \"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667132 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsfpj\" (UniqueName: \"kubernetes.io/projected/06807502-b6d9-4803-93bf-d8ac8e721ef3-kube-api-access-qsfpj\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667159 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-trusted-ca\") pod \"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667186 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99ee0e3b-bdb8-4199-90e3-9c57e971f7b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bvzfw\" (UID: \"99ee0e3b-bdb8-4199-90e3-9c57e971f7b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" Feb 23 13:11:10 
crc kubenswrapper[4851]: I0223 13:11:10.667213 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-serving-cert\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667237 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-node-bootstrap-token\") pod \"machine-config-server-zwhdl\" (UID: \"e659c1af-dde1-4c75-9e48-b1c9a2c7f598\") " pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667280 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsf6w\" (UniqueName: \"kubernetes.io/projected/4c4b7002-d97c-47bf-8de7-1361bcedc079-kube-api-access-lsf6w\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667380 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-service-ca\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667433 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-oauth-serving-cert\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " 
pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667512 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667532 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06807502-b6d9-4803-93bf-d8ac8e721ef3-serving-cert\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667558 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f76f\" (UniqueName: \"kubernetes.io/projected/62353140-dab7-459f-b0d4-c796087cb3f9-kube-api-access-5f76f\") pod \"control-plane-machine-set-operator-78cbb6b69f-klwfn\" (UID: \"62353140-dab7-459f-b0d4-c796087cb3f9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667576 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-config\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667598 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/ce808fab-8894-45af-86e6-5193f1de3201-service-ca-bundle\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667620 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5q6p\" (UniqueName: \"kubernetes.io/projected/1b0d549d-439b-4011-8ea1-47e808a3b715-kube-api-access-s5q6p\") pod \"openshift-config-operator-7777fb866f-f2568\" (UID: \"1b0d549d-439b-4011-8ea1-47e808a3b715\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667637 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8dmb\" (UniqueName: \"kubernetes.io/projected/a6fe30bd-a140-4309-9156-52d361049059-kube-api-access-t8dmb\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667661 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9ab5dc2-f915-4c20-9c85-2380f944bd44-serving-cert\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667680 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9ab5dc2-f915-4c20-9c85-2380f944bd44-encryption-config\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667868 4851 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8f9a11-744a-4d48-8c6d-4ed59acc88a0-config\") pod \"kube-controller-manager-operator-78b949d7b-h8b2q\" (UID: \"7e8f9a11-744a-4d48-8c6d-4ed59acc88a0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.667971 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-proxy-tls\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: \"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668011 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lwgf\" (UID: \"d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668062 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/196d3fce-aab9-4dd5-82fd-7d442664af6e-metrics-tls\") pod \"dns-operator-744455d44c-zzqtr\" (UID: \"196d3fce-aab9-4dd5-82fd-7d442664af6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668086 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-6lwgf\" (UID: \"d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668125 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668148 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c81e69e-6d53-4016-b87e-bdc816dc0365-secret-volume\") pod \"collect-profiles-29530860-zrt4b\" (UID: \"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668167 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-bound-sa-token\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668188 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpdw7\" (UniqueName: \"kubernetes.io/projected/d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b-kube-api-access-dpdw7\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lwgf\" (UID: \"d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668206 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668225 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06807502-b6d9-4803-93bf-d8ac8e721ef3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668242 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e8f9a11-744a-4d48-8c6d-4ed59acc88a0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h8b2q\" (UID: \"7e8f9a11-744a-4d48-8c6d-4ed59acc88a0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668258 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ee0e3b-bdb8-4199-90e3-9c57e971f7b5-config\") pod \"kube-apiserver-operator-766d6c64bb-bvzfw\" (UID: \"99ee0e3b-bdb8-4199-90e3-9c57e971f7b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668274 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e8f9a11-744a-4d48-8c6d-4ed59acc88a0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h8b2q\" (UID: \"7e8f9a11-744a-4d48-8c6d-4ed59acc88a0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668289 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b5eed4cb-5efc-4449-9169-1375cb5e0dff-srv-cert\") pod \"olm-operator-6b444d44fb-gd96x\" (UID: \"b5eed4cb-5efc-4449-9169-1375cb5e0dff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668304 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c81e69e-6d53-4016-b87e-bdc816dc0365-config-volume\") pod \"collect-profiles-29530860-zrt4b\" (UID: \"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668341 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06807502-b6d9-4803-93bf-d8ac8e721ef3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668360 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-registration-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: 
\"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668390 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-etcd-ca\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668409 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9ab5dc2-f915-4c20-9c85-2380f944bd44-etcd-client\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668429 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ce808fab-8894-45af-86e6-5193f1de3201-default-certificate\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668446 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1439c07e-aa17-48f0-a415-35e5ffb0f512-serving-cert\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668463 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z9dx\" (UniqueName: 
\"kubernetes.io/projected/2644ea78-a197-4ce6-8c77-33c40c50e182-kube-api-access-4z9dx\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: \"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668485 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668532 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: \"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668554 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/426e4581-f3d0-49ad-acf5-8466b46a993c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cml5h\" (UID: \"426e4581-f3d0-49ad-acf5-8466b46a993c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668573 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " 
pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668591 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg6vj\" (UniqueName: \"kubernetes.io/projected/328f5d3c-6337-407d-a812-034d9d26069c-kube-api-access-lg6vj\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668609 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872p9\" (UniqueName: \"kubernetes.io/projected/db8c127f-0258-443d-a5b7-308b252f957e-kube-api-access-872p9\") pod \"machine-config-controller-84d6567774-cwfqj\" (UID: \"db8c127f-0258-443d-a5b7-308b252f957e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668629 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06807502-b6d9-4803-93bf-d8ac8e721ef3-etcd-client\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668648 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1da16125-ec58-487d-ae4e-16125c21bd0e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7g2rj\" (UID: \"1da16125-ec58-487d-ae4e-16125c21bd0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668666 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wwn4t\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668685 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-metrics-tls\") pod \"dns-default-lzbwt\" (UID: \"3c34ba22-f973-4f34-b83f-3dcfdb3265d3\") " pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668706 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwwsj\" (UniqueName: \"kubernetes.io/projected/3c81e69e-6d53-4016-b87e-bdc816dc0365-kube-api-access-lwwsj\") pod \"collect-profiles-29530860-zrt4b\" (UID: \"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668728 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a095f17c-1ff0-450a-93b7-1518f99771d9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zs7nt\" (UID: \"a095f17c-1ff0-450a-93b7-1518f99771d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668746 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9ab5dc2-f915-4c20-9c85-2380f944bd44-audit-dir\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " 
pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668764 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2mgdx\" (UID: \"6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668780 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-client-ca\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668798 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wwn4t\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668817 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb8wv\" (UniqueName: \"kubernetes.io/projected/f440cbd8-c4b3-4191-8260-87162a1952fc-kube-api-access-sb8wv\") pod \"migrator-59844c95c7-nmrbr\" (UID: \"f440cbd8-c4b3-4191-8260-87162a1952fc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668839 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-console-config\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668855 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtm57\" (UniqueName: \"kubernetes.io/projected/3f659f30-3a3e-4031-a1bf-b26038294135-kube-api-access-qtm57\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668889 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffcjg\" (UniqueName: \"kubernetes.io/projected/4401f548-7ea9-4246-9ece-dc05c1738ffe-kube-api-access-ffcjg\") pod \"service-ca-9c57cc56f-z8hww\" (UID: \"4401f548-7ea9-4246-9ece-dc05c1738ffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668909 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/598d1af4-7f4c-4815-8b0c-bd364fcc191d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668948 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-audit-policies\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc 
kubenswrapper[4851]: I0223 13:11:10.668948 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-images\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.668966 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.669304 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.669588 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-etcd-ca\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.669846 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lwgf\" (UID: 
\"d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.670249 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jqd8\" (UniqueName: \"kubernetes.io/projected/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-kube-api-access-4jqd8\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.670936 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-etcd-client\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671039 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06807502-b6d9-4803-93bf-d8ac8e721ef3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671103 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-audit\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671133 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/598d1af4-7f4c-4815-8b0c-bd364fcc191d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671169 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd54j\" (UniqueName: \"kubernetes.io/projected/ce4841f9-f6b7-46d9-8a56-b2d532510d4b-kube-api-access-bd54j\") pod \"catalog-operator-68c6474976-scbph\" (UID: \"ce4841f9-f6b7-46d9-8a56-b2d532510d4b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671204 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx55d\" (UniqueName: \"kubernetes.io/projected/58acf679-26d7-4261-8304-f74997e9594f-kube-api-access-rx55d\") pod \"package-server-manager-789f6589d5-zgbsx\" (UID: \"58acf679-26d7-4261-8304-f74997e9594f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671171 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-console-config\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671311 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-plugins-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 
13:11:10.671386 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed030ce-3f90-43e6-9fa5-7df6237a69c4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-btfrr\" (UID: \"8ed030ce-3f90-43e6-9fa5-7df6237a69c4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671550 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/58acf679-26d7-4261-8304-f74997e9594f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zgbsx\" (UID: \"58acf679-26d7-4261-8304-f74997e9594f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671549 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/06807502-b6d9-4803-93bf-d8ac8e721ef3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671598 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b0d549d-439b-4011-8ea1-47e808a3b715-serving-cert\") pod \"openshift-config-operator-7777fb866f-f2568\" (UID: \"1b0d549d-439b-4011-8ea1-47e808a3b715\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671612 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1439c07e-aa17-48f0-a415-35e5ffb0f512-config\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671670 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1439c07e-aa17-48f0-a415-35e5ffb0f512-service-ca-bundle\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671733 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnmwn\" (UniqueName: \"kubernetes.io/projected/b5eed4cb-5efc-4449-9169-1375cb5e0dff-kube-api-access-tnmwn\") pod \"olm-operator-6b444d44fb-gd96x\" (UID: \"b5eed4cb-5efc-4449-9169-1375cb5e0dff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671835 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671840 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06807502-b6d9-4803-93bf-d8ac8e721ef3-serving-cert\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.671955 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/598d1af4-7f4c-4815-8b0c-bd364fcc191d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672061 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-socket-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672123 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e2557a-2454-47ad-93e0-68e266e4b0cf-config\") pod \"service-ca-operator-777779d784-4fg88\" (UID: \"53e2557a-2454-47ad-93e0-68e266e4b0cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672195 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjnjg\" (UniqueName: \"kubernetes.io/projected/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-kube-api-access-jjnjg\") pod \"machine-config-server-zwhdl\" (UID: \"e659c1af-dde1-4c75-9e48-b1c9a2c7f598\") " pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672269 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-config\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672367 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-serving-cert\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672385 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4401f548-7ea9-4246-9ece-dc05c1738ffe-signing-key\") pod \"service-ca-9c57cc56f-z8hww\" (UID: \"4401f548-7ea9-4246-9ece-dc05c1738ffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672490 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvlqt\" (UniqueName: \"kubernetes.io/projected/8ed030ce-3f90-43e6-9fa5-7df6237a69c4-kube-api-access-jvlqt\") pod \"kube-storage-version-migrator-operator-b67b599dd-btfrr\" (UID: \"8ed030ce-3f90-43e6-9fa5-7df6237a69c4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672552 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc 
kubenswrapper[4851]: I0223 13:11:10.672603 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672693 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-mountpoint-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672745 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672801 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvbzr\" (UniqueName: \"kubernetes.io/projected/e9ab5dc2-f915-4c20-9c85-2380f944bd44-kube-api-access-hvbzr\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672849 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/14bfc49a-35b3-4046-8ea4-ad199864f42a-apiservice-cert\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: \"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672896 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14bfc49a-35b3-4046-8ea4-ad199864f42a-webhook-cert\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: \"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.672977 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nh7w\" (UniqueName: \"kubernetes.io/projected/196d3fce-aab9-4dd5-82fd-7d442664af6e-kube-api-access-9nh7w\") pod \"dns-operator-744455d44c-zzqtr\" (UID: \"196d3fce-aab9-4dd5-82fd-7d442664af6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673266 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2644ea78-a197-4ce6-8c77-33c40c50e182-bound-sa-token\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: \"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673359 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc 
kubenswrapper[4851]: I0223 13:11:10.673423 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9ab5dc2-f915-4c20-9c85-2380f944bd44-node-pullsecrets\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673480 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszw9\" (UniqueName: \"kubernetes.io/projected/6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931-kube-api-access-lszw9\") pod \"multus-admission-controller-857f4d67dd-2mgdx\" (UID: \"6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673528 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f659f30-3a3e-4031-a1bf-b26038294135-audit-dir\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673583 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673638 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673714 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-trusted-ca\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673764 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-oauth-config\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673813 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-config\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673872 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06807502-b6d9-4803-93bf-d8ac8e721ef3-audit-policies\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673926 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zd2tt\" (UniqueName: \"kubernetes.io/projected/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-kube-api-access-zd2tt\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: \"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.673984 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1439c07e-aa17-48f0-a415-35e5ffb0f512-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674044 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwsnq\" (UniqueName: \"kubernetes.io/projected/426e4581-f3d0-49ad-acf5-8466b46a993c-kube-api-access-kwsnq\") pod \"openshift-apiserver-operator-796bbdcf4f-cml5h\" (UID: \"426e4581-f3d0-49ad-acf5-8466b46a993c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674098 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rrmr\" (UniqueName: \"kubernetes.io/projected/53e2557a-2454-47ad-93e0-68e266e4b0cf-kube-api-access-8rrmr\") pod \"service-ca-operator-777779d784-4fg88\" (UID: \"53e2557a-2454-47ad-93e0-68e266e4b0cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674106 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a095f17c-1ff0-450a-93b7-1518f99771d9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zs7nt\" (UID: 
\"a095f17c-1ff0-450a-93b7-1518f99771d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674155 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-trusted-ca-bundle\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674225 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bfjx\" (UniqueName: \"kubernetes.io/projected/58e43c54-4e65-4ca6-9a52-f79c58a072d4-kube-api-access-4bfjx\") pod \"downloads-7954f5f757-slxzr\" (UID: \"58e43c54-4e65-4ca6-9a52-f79c58a072d4\") " pod="openshift-console/downloads-7954f5f757-slxzr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674274 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lwgf\" (UID: \"d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674282 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-client-ca\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674362 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/99ee0e3b-bdb8-4199-90e3-9c57e971f7b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bvzfw\" (UID: \"99ee0e3b-bdb8-4199-90e3-9c57e971f7b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674399 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-csi-data-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674400 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674427 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfwkh\" (UniqueName: \"kubernetes.io/projected/1439c07e-aa17-48f0-a415-35e5ffb0f512-kube-api-access-qfwkh\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674460 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6grxx\" (UniqueName: \"kubernetes.io/projected/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-kube-api-access-6grxx\") pod \"marketplace-operator-79b997595-wwn4t\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674534 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ce4841f9-f6b7-46d9-8a56-b2d532510d4b-srv-cert\") pod \"catalog-operator-68c6474976-scbph\" (UID: \"ce4841f9-f6b7-46d9-8a56-b2d532510d4b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674585 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674685 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-config\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674713 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06807502-b6d9-4803-93bf-d8ac8e721ef3-encryption-config\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674740 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ce808fab-8894-45af-86e6-5193f1de3201-stats-auth\") pod 
\"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674752 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674781 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674810 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-serving-cert\") pod \"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674890 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/62353140-dab7-459f-b0d4-c796087cb3f9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-klwfn\" (UID: \"62353140-dab7-459f-b0d4-c796087cb3f9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 
13:11:10.674913 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-config-volume\") pod \"dns-default-lzbwt\" (UID: \"3c34ba22-f973-4f34-b83f-3dcfdb3265d3\") " pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674931 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjxmf\" (UniqueName: \"kubernetes.io/projected/08e6c5b9-012d-4b1e-9704-b3cd1368a281-kube-api-access-vjxmf\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674959 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-config\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.674982 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-829hz\" (UniqueName: \"kubernetes.io/projected/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-kube-api-access-829hz\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675013 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/06807502-b6d9-4803-93bf-d8ac8e721ef3-audit-policies\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 
23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675030 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675052 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce808fab-8894-45af-86e6-5193f1de3201-metrics-certs\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675073 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-certificates\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675092 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmdgz\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-kube-api-access-cmdgz\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675114 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1b0d549d-439b-4011-8ea1-47e808a3b715-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-f2568\" (UID: \"1b0d549d-439b-4011-8ea1-47e808a3b715\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675133 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:10 crc kubenswrapper[4851]: E0223 13:11:10.675155 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.175137548 +0000 UTC m=+225.856841226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675184 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r2ck\" (UniqueName: \"kubernetes.io/projected/29b83ba5-d2ac-44e2-9281-36db5ceb5d63-kube-api-access-2r2ck\") pod \"ingress-canary-cxc28\" (UID: \"29b83ba5-d2ac-44e2-9281-36db5ceb5d63\") " pod="openshift-ingress-canary/ingress-canary-cxc28" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675213 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/426e4581-f3d0-49ad-acf5-8466b46a993c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cml5h\" (UID: \"426e4581-f3d0-49ad-acf5-8466b46a993c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675240 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2644ea78-a197-4ce6-8c77-33c40c50e182-trusted-ca\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: \"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675263 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b83ba5-d2ac-44e2-9281-36db5ceb5d63-cert\") pod \"ingress-canary-cxc28\" (UID: \"29b83ba5-d2ac-44e2-9281-36db5ceb5d63\") " pod="openshift-ingress-canary/ingress-canary-cxc28" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.675849 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-config\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.676280 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/426e4581-f3d0-49ad-acf5-8466b46a993c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cml5h\" (UID: \"426e4581-f3d0-49ad-acf5-8466b46a993c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.676554 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4b7002-d97c-47bf-8de7-1361bcedc079-serving-cert\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.676602 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txckl\" (UniqueName: \"kubernetes.io/projected/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-kube-api-access-txckl\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.676624 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ce4841f9-f6b7-46d9-8a56-b2d532510d4b-profile-collector-cert\") pod \"catalog-operator-68c6474976-scbph\" (UID: \"ce4841f9-f6b7-46d9-8a56-b2d532510d4b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.676646 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q74jh\" (UniqueName: \"kubernetes.io/projected/14bfc49a-35b3-4046-8ea4-ad199864f42a-kube-api-access-q74jh\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: \"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.676688 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db8c127f-0258-443d-a5b7-308b252f957e-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-cwfqj\" (UID: \"db8c127f-0258-443d-a5b7-308b252f957e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.676710 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-tls\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.676730 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-serving-cert\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.676750 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8c127f-0258-443d-a5b7-308b252f957e-proxy-tls\") pod \"machine-config-controller-84d6567774-cwfqj\" (UID: \"db8c127f-0258-443d-a5b7-308b252f957e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.676987 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-config\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.677470 4851 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.677709 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-client-ca\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.678021 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/06807502-b6d9-4803-93bf-d8ac8e721ef3-encryption-config\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.678219 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-oauth-config\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.678273 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-etcd-serving-ca\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.678496 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-certificates\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: 
\"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.678591 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2644ea78-a197-4ce6-8c77-33c40c50e182-metrics-tls\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: \"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.678609 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/598d1af4-7f4c-4815-8b0c-bd364fcc191d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.678632 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ed030ce-3f90-43e6-9fa5-7df6237a69c4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-btfrr\" (UID: \"8ed030ce-3f90-43e6-9fa5-7df6237a69c4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.678652 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-trusted-ca\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.678730 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53e2557a-2454-47ad-93e0-68e266e4b0cf-serving-cert\") pod \"service-ca-operator-777779d784-4fg88\" (UID: \"53e2557a-2454-47ad-93e0-68e266e4b0cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.679056 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-config\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.680410 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1b0d549d-439b-4011-8ea1-47e808a3b715-available-featuregates\") pod \"openshift-config-operator-7777fb866f-f2568\" (UID: \"1b0d549d-439b-4011-8ea1-47e808a3b715\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.680724 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-tls\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.681756 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/06807502-b6d9-4803-93bf-d8ac8e721ef3-etcd-client\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.682581 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.683274 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-trusted-ca-bundle\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.683495 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-serving-cert\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.683598 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4b7002-d97c-47bf-8de7-1361bcedc079-serving-cert\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.685060 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/426e4581-f3d0-49ad-acf5-8466b46a993c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cml5h\" (UID: \"426e4581-f3d0-49ad-acf5-8466b46a993c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.715725 
4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.721493 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.737573 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.757974 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.778653 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.780555 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.780761 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.780793 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.780825 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd2tt\" (UniqueName: \"kubernetes.io/projected/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-kube-api-access-zd2tt\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: \"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.780847 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1439c07e-aa17-48f0-a415-35e5ffb0f512-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.780866 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rrmr\" (UniqueName: \"kubernetes.io/projected/53e2557a-2454-47ad-93e0-68e266e4b0cf-kube-api-access-8rrmr\") pod \"service-ca-operator-777779d784-4fg88\" (UID: \"53e2557a-2454-47ad-93e0-68e266e4b0cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.780891 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99ee0e3b-bdb8-4199-90e3-9c57e971f7b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bvzfw\" (UID: \"99ee0e3b-bdb8-4199-90e3-9c57e971f7b5\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.780917 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-csi-data-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.780955 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfwkh\" (UniqueName: \"kubernetes.io/projected/1439c07e-aa17-48f0-a415-35e5ffb0f512-kube-api-access-qfwkh\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.780972 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6grxx\" (UniqueName: \"kubernetes.io/projected/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-kube-api-access-6grxx\") pod \"marketplace-operator-79b997595-wwn4t\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.780995 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ce4841f9-f6b7-46d9-8a56-b2d532510d4b-srv-cert\") pod \"catalog-operator-68c6474976-scbph\" (UID: \"ce4841f9-f6b7-46d9-8a56-b2d532510d4b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781012 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/ce808fab-8894-45af-86e6-5193f1de3201-stats-auth\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781036 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/62353140-dab7-459f-b0d4-c796087cb3f9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-klwfn\" (UID: \"62353140-dab7-459f-b0d4-c796087cb3f9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781063 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-serving-cert\") pod \"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781079 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-config-volume\") pod \"dns-default-lzbwt\" (UID: \"3c34ba22-f973-4f34-b83f-3dcfdb3265d3\") " pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781105 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjxmf\" (UniqueName: \"kubernetes.io/projected/08e6c5b9-012d-4b1e-9704-b3cd1368a281-kube-api-access-vjxmf\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781123 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2r2ck\" (UniqueName: \"kubernetes.io/projected/29b83ba5-d2ac-44e2-9281-36db5ceb5d63-kube-api-access-2r2ck\") pod \"ingress-canary-cxc28\" (UID: \"29b83ba5-d2ac-44e2-9281-36db5ceb5d63\") " pod="openshift-ingress-canary/ingress-canary-cxc28" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781142 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781159 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce808fab-8894-45af-86e6-5193f1de3201-metrics-certs\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781190 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b83ba5-d2ac-44e2-9281-36db5ceb5d63-cert\") pod \"ingress-canary-cxc28\" (UID: \"29b83ba5-d2ac-44e2-9281-36db5ceb5d63\") " pod="openshift-ingress-canary/ingress-canary-cxc28" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781218 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2644ea78-a197-4ce6-8c77-33c40c50e182-trusted-ca\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: \"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781243 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ce4841f9-f6b7-46d9-8a56-b2d532510d4b-profile-collector-cert\") pod \"catalog-operator-68c6474976-scbph\" (UID: \"ce4841f9-f6b7-46d9-8a56-b2d532510d4b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781270 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q74jh\" (UniqueName: \"kubernetes.io/projected/14bfc49a-35b3-4046-8ea4-ad199864f42a-kube-api-access-q74jh\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: \"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781294 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db8c127f-0258-443d-a5b7-308b252f957e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cwfqj\" (UID: \"db8c127f-0258-443d-a5b7-308b252f957e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781349 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8c127f-0258-443d-a5b7-308b252f957e-proxy-tls\") pod \"machine-config-controller-84d6567774-cwfqj\" (UID: \"db8c127f-0258-443d-a5b7-308b252f957e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781376 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2644ea78-a197-4ce6-8c77-33c40c50e182-metrics-tls\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: 
\"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781395 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ed030ce-3f90-43e6-9fa5-7df6237a69c4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-btfrr\" (UID: \"8ed030ce-3f90-43e6-9fa5-7df6237a69c4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781417 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-etcd-serving-ca\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781452 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53e2557a-2454-47ad-93e0-68e266e4b0cf-serving-cert\") pod \"service-ca-operator-777779d784-4fg88\" (UID: \"53e2557a-2454-47ad-93e0-68e266e4b0cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781475 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1da16125-ec58-487d-ae4e-16125c21bd0e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7g2rj\" (UID: \"1da16125-ec58-487d-ae4e-16125c21bd0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781494 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xmdz8\" (UniqueName: \"kubernetes.io/projected/ce808fab-8894-45af-86e6-5193f1de3201-kube-api-access-xmdz8\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781523 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bghf2\" (UniqueName: \"kubernetes.io/projected/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-kube-api-access-bghf2\") pod \"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781546 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/328f5d3c-6337-407d-a812-034d9d26069c-serving-cert\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781565 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/14bfc49a-35b3-4046-8ea4-ad199864f42a-tmpfs\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: \"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781584 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781606 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b5eed4cb-5efc-4449-9169-1375cb5e0dff-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gd96x\" (UID: \"b5eed4cb-5efc-4449-9169-1375cb5e0dff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781627 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-image-import-ca\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781653 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-certs\") pod \"machine-config-server-zwhdl\" (UID: \"e659c1af-dde1-4c75-9e48-b1c9a2c7f598\") " pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781671 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-config\") pod \"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781694 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4401f548-7ea9-4246-9ece-dc05c1738ffe-signing-cabundle\") pod \"service-ca-9c57cc56f-z8hww\" (UID: 
\"4401f548-7ea9-4246-9ece-dc05c1738ffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781713 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1da16125-ec58-487d-ae4e-16125c21bd0e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7g2rj\" (UID: \"1da16125-ec58-487d-ae4e-16125c21bd0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781732 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dvvw\" (UniqueName: \"kubernetes.io/projected/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-kube-api-access-2dvvw\") pod \"dns-default-lzbwt\" (UID: \"3c34ba22-f973-4f34-b83f-3dcfdb3265d3\") " pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781750 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-images\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: \"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781771 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-trusted-ca\") pod \"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781793 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/99ee0e3b-bdb8-4199-90e3-9c57e971f7b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bvzfw\" (UID: \"99ee0e3b-bdb8-4199-90e3-9c57e971f7b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781799 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-csi-data-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781817 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-node-bootstrap-token\") pod \"machine-config-server-zwhdl\" (UID: \"e659c1af-dde1-4c75-9e48-b1c9a2c7f598\") " pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.781974 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f76f\" (UniqueName: \"kubernetes.io/projected/62353140-dab7-459f-b0d4-c796087cb3f9-kube-api-access-5f76f\") pod \"control-plane-machine-set-operator-78cbb6b69f-klwfn\" (UID: \"62353140-dab7-459f-b0d4-c796087cb3f9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782021 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9ab5dc2-f915-4c20-9c85-2380f944bd44-serving-cert\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782062 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9ab5dc2-f915-4c20-9c85-2380f944bd44-encryption-config\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782101 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8f9a11-744a-4d48-8c6d-4ed59acc88a0-config\") pod \"kube-controller-manager-operator-78b949d7b-h8b2q\" (UID: \"7e8f9a11-744a-4d48-8c6d-4ed59acc88a0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782136 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-config\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782169 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce808fab-8894-45af-86e6-5193f1de3201-service-ca-bundle\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782231 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-proxy-tls\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: \"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782266 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/196d3fce-aab9-4dd5-82fd-7d442664af6e-metrics-tls\") pod \"dns-operator-744455d44c-zzqtr\" (UID: \"196d3fce-aab9-4dd5-82fd-7d442664af6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782307 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e8f9a11-744a-4d48-8c6d-4ed59acc88a0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h8b2q\" (UID: \"7e8f9a11-744a-4d48-8c6d-4ed59acc88a0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782372 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c81e69e-6d53-4016-b87e-bdc816dc0365-secret-volume\") pod \"collect-profiles-29530860-zrt4b\" (UID: \"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782433 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ee0e3b-bdb8-4199-90e3-9c57e971f7b5-config\") pod \"kube-apiserver-operator-766d6c64bb-bvzfw\" (UID: \"99ee0e3b-bdb8-4199-90e3-9c57e971f7b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782469 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/3c81e69e-6d53-4016-b87e-bdc816dc0365-config-volume\") pod \"collect-profiles-29530860-zrt4b\" (UID: \"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782504 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e8f9a11-744a-4d48-8c6d-4ed59acc88a0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h8b2q\" (UID: \"7e8f9a11-744a-4d48-8c6d-4ed59acc88a0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782536 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b5eed4cb-5efc-4449-9169-1375cb5e0dff-srv-cert\") pod \"olm-operator-6b444d44fb-gd96x\" (UID: \"b5eed4cb-5efc-4449-9169-1375cb5e0dff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782570 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-registration-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782615 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ce808fab-8894-45af-86e6-5193f1de3201-default-certificate\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782651 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9ab5dc2-f915-4c20-9c85-2380f944bd44-etcd-client\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782699 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: \"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782733 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1439c07e-aa17-48f0-a415-35e5ffb0f512-serving-cert\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782767 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z9dx\" (UniqueName: \"kubernetes.io/projected/2644ea78-a197-4ce6-8c77-33c40c50e182-kube-api-access-4z9dx\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: \"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782802 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: 
\"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782845 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-872p9\" (UniqueName: \"kubernetes.io/projected/db8c127f-0258-443d-a5b7-308b252f957e-kube-api-access-872p9\") pod \"machine-config-controller-84d6567774-cwfqj\" (UID: \"db8c127f-0258-443d-a5b7-308b252f957e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782879 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782913 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg6vj\" (UniqueName: \"kubernetes.io/projected/328f5d3c-6337-407d-a812-034d9d26069c-kube-api-access-lg6vj\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782947 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1da16125-ec58-487d-ae4e-16125c21bd0e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7g2rj\" (UID: \"1da16125-ec58-487d-ae4e-16125c21bd0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.782983 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wwn4t\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:10 crc kubenswrapper[4851]: E0223 13:11:10.783034 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.282995767 +0000 UTC m=+225.964699475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783087 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-metrics-tls\") pod \"dns-default-lzbwt\" (UID: \"3c34ba22-f973-4f34-b83f-3dcfdb3265d3\") " pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783137 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwwsj\" (UniqueName: \"kubernetes.io/projected/3c81e69e-6d53-4016-b87e-bdc816dc0365-kube-api-access-lwwsj\") pod \"collect-profiles-29530860-zrt4b\" (UID: \"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 
13:11:10.783205 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-client-ca\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783263 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wwn4t\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783305 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb8wv\" (UniqueName: \"kubernetes.io/projected/f440cbd8-c4b3-4191-8260-87162a1952fc-kube-api-access-sb8wv\") pod \"migrator-59844c95c7-nmrbr\" (UID: \"f440cbd8-c4b3-4191-8260-87162a1952fc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783373 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9ab5dc2-f915-4c20-9c85-2380f944bd44-audit-dir\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783408 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2mgdx\" (UID: \"6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783444 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtm57\" (UniqueName: \"kubernetes.io/projected/3f659f30-3a3e-4031-a1bf-b26038294135-kube-api-access-qtm57\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783505 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffcjg\" (UniqueName: \"kubernetes.io/projected/4401f548-7ea9-4246-9ece-dc05c1738ffe-kube-api-access-ffcjg\") pod \"service-ca-9c57cc56f-z8hww\" (UID: \"4401f548-7ea9-4246-9ece-dc05c1738ffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783542 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783616 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-audit-policies\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783664 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-audit\") pod 
\"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783703 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd54j\" (UniqueName: \"kubernetes.io/projected/ce4841f9-f6b7-46d9-8a56-b2d532510d4b-kube-api-access-bd54j\") pod \"catalog-operator-68c6474976-scbph\" (UID: \"ce4841f9-f6b7-46d9-8a56-b2d532510d4b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783741 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783794 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-plugins-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783831 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed030ce-3f90-43e6-9fa5-7df6237a69c4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-btfrr\" (UID: \"8ed030ce-3f90-43e6-9fa5-7df6237a69c4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783873 4851 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rx55d\" (UniqueName: \"kubernetes.io/projected/58acf679-26d7-4261-8304-f74997e9594f-kube-api-access-rx55d\") pod \"package-server-manager-789f6589d5-zgbsx\" (UID: \"58acf679-26d7-4261-8304-f74997e9594f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783911 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/58acf679-26d7-4261-8304-f74997e9594f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zgbsx\" (UID: \"58acf679-26d7-4261-8304-f74997e9594f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783946 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1439c07e-aa17-48f0-a415-35e5ffb0f512-config\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.783987 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1439c07e-aa17-48f0-a415-35e5ffb0f512-service-ca-bundle\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.784038 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.784092 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnmwn\" (UniqueName: \"kubernetes.io/projected/b5eed4cb-5efc-4449-9169-1375cb5e0dff-kube-api-access-tnmwn\") pod \"olm-operator-6b444d44fb-gd96x\" (UID: \"b5eed4cb-5efc-4449-9169-1375cb5e0dff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.784141 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjnjg\" (UniqueName: \"kubernetes.io/projected/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-kube-api-access-jjnjg\") pod \"machine-config-server-zwhdl\" (UID: \"e659c1af-dde1-4c75-9e48-b1c9a2c7f598\") " pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.784195 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-socket-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.784242 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e2557a-2454-47ad-93e0-68e266e4b0cf-config\") pod \"service-ca-operator-777779d784-4fg88\" (UID: \"53e2557a-2454-47ad-93e0-68e266e4b0cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.784911 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/14bfc49a-35b3-4046-8ea4-ad199864f42a-tmpfs\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: \"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.784970 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.785900 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-etcd-serving-ca\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.786053 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2644ea78-a197-4ce6-8c77-33c40c50e182-metrics-tls\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: \"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.786100 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-image-import-ca\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.786773 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.784289 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-config\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.786956 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4401f548-7ea9-4246-9ece-dc05c1738ffe-signing-key\") pod \"service-ca-9c57cc56f-z8hww\" (UID: \"4401f548-7ea9-4246-9ece-dc05c1738ffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787045 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvlqt\" (UniqueName: \"kubernetes.io/projected/8ed030ce-3f90-43e6-9fa5-7df6237a69c4-kube-api-access-jvlqt\") pod \"kube-storage-version-migrator-operator-b67b599dd-btfrr\" (UID: \"8ed030ce-3f90-43e6-9fa5-7df6237a69c4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787129 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-mountpoint-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 
13:11:10.787200 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787260 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787309 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787377 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvbzr\" (UniqueName: \"kubernetes.io/projected/e9ab5dc2-f915-4c20-9c85-2380f944bd44-kube-api-access-hvbzr\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787415 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14bfc49a-35b3-4046-8ea4-ad199864f42a-apiservice-cert\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: 
\"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787449 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14bfc49a-35b3-4046-8ea4-ad199864f42a-webhook-cert\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: \"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787502 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nh7w\" (UniqueName: \"kubernetes.io/projected/196d3fce-aab9-4dd5-82fd-7d442664af6e-kube-api-access-9nh7w\") pod \"dns-operator-744455d44c-zzqtr\" (UID: \"196d3fce-aab9-4dd5-82fd-7d442664af6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787576 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2644ea78-a197-4ce6-8c77-33c40c50e182-bound-sa-token\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: \"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787619 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lszw9\" (UniqueName: \"kubernetes.io/projected/6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931-kube-api-access-lszw9\") pod \"multus-admission-controller-857f4d67dd-2mgdx\" (UID: \"6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787646 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-config\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787672 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f659f30-3a3e-4031-a1bf-b26038294135-audit-dir\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787726 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9ab5dc2-f915-4c20-9c85-2380f944bd44-node-pullsecrets\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787974 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e9ab5dc2-f915-4c20-9c85-2380f944bd44-node-pullsecrets\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.788247 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce808fab-8894-45af-86e6-5193f1de3201-service-ca-bundle\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.788834 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/2644ea78-a197-4ce6-8c77-33c40c50e182-trusted-ca\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: \"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.789031 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-client-ca\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.789176 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9ab5dc2-f915-4c20-9c85-2380f944bd44-audit-dir\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.789944 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/db8c127f-0258-443d-a5b7-308b252f957e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cwfqj\" (UID: \"db8c127f-0258-443d-a5b7-308b252f957e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.789951 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1439c07e-aa17-48f0-a415-35e5ffb0f512-serving-cert\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.790088 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1439c07e-aa17-48f0-a415-35e5ffb0f512-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.790083 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ed030ce-3f90-43e6-9fa5-7df6237a69c4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-btfrr\" (UID: \"8ed030ce-3f90-43e6-9fa5-7df6237a69c4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.790491 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-config\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787134 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: \"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.787674 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce808fab-8894-45af-86e6-5193f1de3201-metrics-certs\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " 
pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.790899 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/328f5d3c-6337-407d-a812-034d9d26069c-serving-cert\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.791351 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.791589 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.791845 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1439c07e-aa17-48f0-a415-35e5ffb0f512-service-ca-bundle\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.791848 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-registration-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.792367 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e9ab5dc2-f915-4c20-9c85-2380f944bd44-etcd-client\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.792494 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99ee0e3b-bdb8-4199-90e3-9c57e971f7b5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bvzfw\" (UID: \"99ee0e3b-bdb8-4199-90e3-9c57e971f7b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.792546 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99ee0e3b-bdb8-4199-90e3-9c57e971f7b5-config\") pod \"kube-apiserver-operator-766d6c64bb-bvzfw\" (UID: \"99ee0e3b-bdb8-4199-90e3-9c57e971f7b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.793617 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-audit-policies\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.794361 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/e9ab5dc2-f915-4c20-9c85-2380f944bd44-audit\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.795782 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ce808fab-8894-45af-86e6-5193f1de3201-stats-auth\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.795937 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-socket-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.796168 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.796568 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.796644 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" 
(UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-mountpoint-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.797796 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f659f30-3a3e-4031-a1bf-b26038294135-audit-dir\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.797891 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/08e6c5b9-012d-4b1e-9704-b3cd1368a281-plugins-dir\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.798363 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9ab5dc2-f915-4c20-9c85-2380f944bd44-serving-cert\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.798414 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ed030ce-3f90-43e6-9fa5-7df6237a69c4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-btfrr\" (UID: \"8ed030ce-3f90-43e6-9fa5-7df6237a69c4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.798521 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1439c07e-aa17-48f0-a415-35e5ffb0f512-config\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.798929 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.799392 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/196d3fce-aab9-4dd5-82fd-7d442664af6e-metrics-tls\") pod \"dns-operator-744455d44c-zzqtr\" (UID: \"196d3fce-aab9-4dd5-82fd-7d442664af6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.799453 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e8f9a11-744a-4d48-8c6d-4ed59acc88a0-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-h8b2q\" (UID: \"7e8f9a11-744a-4d48-8c6d-4ed59acc88a0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.800163 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.800812 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.801272 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ce808fab-8894-45af-86e6-5193f1de3201-default-certificate\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.801278 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e9ab5dc2-f915-4c20-9c85-2380f944bd44-encryption-config\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.801619 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.801763 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.802478 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.803341 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.807012 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8f9a11-744a-4d48-8c6d-4ed59acc88a0-config\") pod \"kube-controller-manager-operator-78b949d7b-h8b2q\" (UID: \"7e8f9a11-744a-4d48-8c6d-4ed59acc88a0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.818364 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.837932 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.859067 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.870481 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1da16125-ec58-487d-ae4e-16125c21bd0e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7g2rj\" (UID: \"1da16125-ec58-487d-ae4e-16125c21bd0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.877221 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.889397 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: E0223 13:11:10.889886 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.389867429 +0000 UTC m=+226.071571097 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.898270 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.917207 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.925217 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1da16125-ec58-487d-ae4e-16125c21bd0e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7g2rj\" (UID: \"1da16125-ec58-487d-ae4e-16125c21bd0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.937725 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.957232 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.978166 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.990381 4851 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:10 crc kubenswrapper[4851]: E0223 13:11:10.990603 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.490573612 +0000 UTC m=+226.172277290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.991434 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:10 crc kubenswrapper[4851]: E0223 13:11:10.991819 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.491800306 +0000 UTC m=+226.173504004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:10 crc kubenswrapper[4851]: I0223 13:11:10.997728 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.007098 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ce4841f9-f6b7-46d9-8a56-b2d532510d4b-srv-cert\") pod \"catalog-operator-68c6474976-scbph\" (UID: \"ce4841f9-f6b7-46d9-8a56-b2d532510d4b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.018811 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.024035 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ce4841f9-f6b7-46d9-8a56-b2d532510d4b-profile-collector-cert\") pod \"catalog-operator-68c6474976-scbph\" (UID: \"ce4841f9-f6b7-46d9-8a56-b2d532510d4b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.026368 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c81e69e-6d53-4016-b87e-bdc816dc0365-secret-volume\") pod \"collect-profiles-29530860-zrt4b\" (UID: 
\"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.029425 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b5eed4cb-5efc-4449-9169-1375cb5e0dff-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gd96x\" (UID: \"b5eed4cb-5efc-4449-9169-1375cb5e0dff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.037715 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.058829 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.067978 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/62353140-dab7-459f-b0d4-c796087cb3f9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-klwfn\" (UID: \"62353140-dab7-459f-b0d4-c796087cb3f9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.077864 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.093029 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.093312 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.59327694 +0000 UTC m=+226.274980648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.093672 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.094061 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.594049911 +0000 UTC m=+226.275753589 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.097052 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.117649 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.122955 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wwn4t\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.146414 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.150419 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wwn4t\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.155172 4851 request.go:700] Waited for 1.008288076s 
due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.157671 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.177390 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.195758 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.195957 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.695926997 +0000 UTC m=+226.377630695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.197003 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.197663 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.198177 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.698160618 +0000 UTC m=+226.379864306 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.218191 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.228060 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/db8c127f-0258-443d-a5b7-308b252f957e-proxy-tls\") pod \"machine-config-controller-84d6567774-cwfqj\" (UID: \"db8c127f-0258-443d-a5b7-308b252f957e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.251696 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64wl6\" (UniqueName: \"kubernetes.io/projected/49cf75f7-0c60-4281-b114-d43db3ea4e3c-kube-api-access-64wl6\") pod \"machine-approver-56656f9798-tx9dh\" (UID: \"49cf75f7-0c60-4281-b114-d43db3ea4e3c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.257299 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.268153 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-images\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: 
\"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.277271 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.297445 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.298769 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.299024 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.798992585 +0000 UTC m=+226.480696273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.299123 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.299751 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.799739445 +0000 UTC m=+226.481443123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.307596 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-proxy-tls\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: \"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.318164 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.332846 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2mgdx\" (UID: \"6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.337237 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.357950 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.361500 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14bfc49a-35b3-4046-8ea4-ad199864f42a-webhook-cert\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: \"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.361668 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14bfc49a-35b3-4046-8ea4-ad199864f42a-apiservice-cert\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: \"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.377987 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.388444 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/58acf679-26d7-4261-8304-f74997e9594f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zgbsx\" (UID: \"58acf679-26d7-4261-8304-f74997e9594f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.397572 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.400431 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.400587 
4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.900558062 +0000 UTC m=+226.582261750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.400732 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.401178 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:11.901158889 +0000 UTC m=+226.582862587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.406429 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-config\") pod \"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.417097 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.438492 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.448038 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-serving-cert\") pod \"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.478889 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.483734 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 23 13:11:11 
crc kubenswrapper[4851]: I0223 13:11:11.486248 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-trusted-ca\") pod \"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.497342 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.501882 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.502051 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.002032767 +0000 UTC m=+226.683736445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.502583 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.502928 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.002920641 +0000 UTC m=+226.684624319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.517031 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.532014 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b5eed4cb-5efc-4449-9169-1375cb5e0dff-srv-cert\") pod \"olm-operator-6b444d44fb-gd96x\" (UID: \"b5eed4cb-5efc-4449-9169-1375cb5e0dff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.539120 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.544873 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.558711 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: W0223 13:11:11.563105 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49cf75f7_0c60_4281_b114_d43db3ea4e3c.slice/crio-2fcdf4bf3bd22523b1a9e890cc8152be55fea9ea2a2c7eaf264654867f1ab377 WatchSource:0}: Error finding container 2fcdf4bf3bd22523b1a9e890cc8152be55fea9ea2a2c7eaf264654867f1ab377: Status 404 returned error can't find the container with id 2fcdf4bf3bd22523b1a9e890cc8152be55fea9ea2a2c7eaf264654867f1ab377 Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.579518 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.594098 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/4401f548-7ea9-4246-9ece-dc05c1738ffe-signing-key\") pod \"service-ca-9c57cc56f-z8hww\" (UID: \"4401f548-7ea9-4246-9ece-dc05c1738ffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.597528 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.603639 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.603842 4851 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.103792919 +0000 UTC m=+226.785496607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.604948 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.605531 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.105510766 +0000 UTC m=+226.787214484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.606115 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/4401f548-7ea9-4246-9ece-dc05c1738ffe-signing-cabundle\") pod \"service-ca-9c57cc56f-z8hww\" (UID: \"4401f548-7ea9-4246-9ece-dc05c1738ffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.617821 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.638997 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.657113 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.667173 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53e2557a-2454-47ad-93e0-68e266e4b0cf-config\") pod \"service-ca-operator-777779d784-4fg88\" (UID: \"53e2557a-2454-47ad-93e0-68e266e4b0cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.676301 4851 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.695897 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.705569 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.705822 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.205791818 +0000 UTC m=+226.887495506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.706135 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.707112 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.207101184 +0000 UTC m=+226.888804872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.710361 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53e2557a-2454-47ad-93e0-68e266e4b0cf-serving-cert\") pod \"service-ca-operator-777779d784-4fg88\" (UID: \"53e2557a-2454-47ad-93e0-68e266e4b0cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.717656 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.736961 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.743110 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c81e69e-6d53-4016-b87e-bdc816dc0365-config-volume\") pod \"collect-profiles-29530860-zrt4b\" (UID: \"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.757215 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.778230 4851 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"dns-default" Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.782597 4851 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.782713 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-node-bootstrap-token podName:e659c1af-dde1-4c75-9e48-b1c9a2c7f598 nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.28268829 +0000 UTC m=+226.964391978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-node-bootstrap-token") pod "machine-config-server-zwhdl" (UID: "e659c1af-dde1-4c75-9e48-b1c9a2c7f598") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.784181 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-config-volume\") pod \"dns-default-lzbwt\" (UID: \"3c34ba22-f973-4f34-b83f-3dcfdb3265d3\") " pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.786272 4851 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.786963 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-certs podName:e659c1af-dde1-4c75-9e48-b1c9a2c7f598 nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.286930116 +0000 UTC m=+226.968633814 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-certs") pod "machine-config-server-zwhdl" (UID: "e659c1af-dde1-4c75-9e48-b1c9a2c7f598") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.787936 4851 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.788043 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29b83ba5-d2ac-44e2-9281-36db5ceb5d63-cert podName:29b83ba5-d2ac-44e2-9281-36db5ceb5d63 nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.288015126 +0000 UTC m=+226.969719024 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29b83ba5-d2ac-44e2-9281-36db5ceb5d63-cert") pod "ingress-canary-cxc28" (UID: "29b83ba5-d2ac-44e2-9281-36db5ceb5d63") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.789100 4851 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.789188 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-metrics-tls podName:3c34ba22-f973-4f34-b83f-3dcfdb3265d3 nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.289169008 +0000 UTC m=+226.970872896 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-metrics-tls") pod "dns-default-lzbwt" (UID: "3c34ba22-f973-4f34-b83f-3dcfdb3265d3") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.797471 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.807706 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.807860 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.307831478 +0000 UTC m=+226.989535156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.808028 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.808445 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.308434994 +0000 UTC m=+226.990138672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.817887 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.837489 4851 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.857929 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.876916 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.877933 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" event={"ID":"49cf75f7-0c60-4281-b114-d43db3ea4e3c","Type":"ContainerStarted","Data":"111a0708b9584024615937d82dc388575e57e37eb17515259b2de8b300693d69"} Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.877982 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" event={"ID":"49cf75f7-0c60-4281-b114-d43db3ea4e3c","Type":"ContainerStarted","Data":"2fcdf4bf3bd22523b1a9e890cc8152be55fea9ea2a2c7eaf264654867f1ab377"} Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.897769 4851 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.909952 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:11 crc kubenswrapper[4851]: E0223 13:11:11.910644 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.410630498 +0000 UTC m=+227.092334176 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.917959 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.937473 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.957055 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.977558 4851 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 23 13:11:11 crc kubenswrapper[4851]: I0223 13:11:11.996784 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.011251 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.011697 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.511678541 +0000 UTC m=+227.193382219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.017835 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.071678 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skktx\" (UniqueName: \"kubernetes.io/projected/a095f17c-1ff0-450a-93b7-1518f99771d9-kube-api-access-skktx\") pod \"cluster-samples-operator-665b6dd947-zs7nt\" (UID: \"a095f17c-1ff0-450a-93b7-1518f99771d9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.091160 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsf6w\" (UniqueName: \"kubernetes.io/projected/4c4b7002-d97c-47bf-8de7-1361bcedc079-kube-api-access-lsf6w\") pod \"controller-manager-879f6c89f-fcq25\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.110596 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsfpj\" (UniqueName: \"kubernetes.io/projected/06807502-b6d9-4803-93bf-d8ac8e721ef3-kube-api-access-qsfpj\") pod \"apiserver-7bbb656c7d-gjsl5\" (UID: \"06807502-b6d9-4803-93bf-d8ac8e721ef3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.112250 4851 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.112422 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.612394125 +0000 UTC m=+227.294097823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.112613 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.112929 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.612911439 +0000 UTC m=+227.294615207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.128022 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.135826 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.154594 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8dmb\" (UniqueName: \"kubernetes.io/projected/a6fe30bd-a140-4309-9156-52d361049059-kube-api-access-t8dmb\") pod \"console-f9d7485db-x8scz\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.171237 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5q6p\" (UniqueName: \"kubernetes.io/projected/1b0d549d-439b-4011-8ea1-47e808a3b715-kube-api-access-s5q6p\") pod \"openshift-config-operator-7777fb866f-f2568\" (UID: \"1b0d549d-439b-4011-8ea1-47e808a3b715\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 
13:11:12.174925 4851 request.go:700] Waited for 1.505042641s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/registry/token Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.194614 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-bound-sa-token\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.213613 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.213874 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.713844959 +0000 UTC m=+227.395548647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.216949 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpdw7\" (UniqueName: \"kubernetes.io/projected/d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b-kube-api-access-dpdw7\") pod \"openshift-controller-manager-operator-756b6f6bc6-6lwgf\" (UID: \"d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.237524 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jqd8\" (UniqueName: \"kubernetes.io/projected/24b17f04-ab64-4a33-9c9b-1273fc5ae0ba-kube-api-access-4jqd8\") pod \"etcd-operator-b45778765-8r5vd\" (UID: \"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.254211 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.256100 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bfjx\" (UniqueName: \"kubernetes.io/projected/58e43c54-4e65-4ca6-9a52-f79c58a072d4-kube-api-access-4bfjx\") pod \"downloads-7954f5f757-slxzr\" (UID: \"58e43c54-4e65-4ca6-9a52-f79c58a072d4\") " pod="openshift-console/downloads-7954f5f757-slxzr" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.279710 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.283241 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-829hz\" (UniqueName: \"kubernetes.io/projected/8f8399a9-b50e-4ccb-8ab8-3e245ab4f229-kube-api-access-829hz\") pod \"machine-api-operator-5694c8668f-dw6fk\" (UID: \"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.288925 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.293112 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwsnq\" (UniqueName: \"kubernetes.io/projected/426e4581-f3d0-49ad-acf5-8466b46a993c-kube-api-access-kwsnq\") pod \"openshift-apiserver-operator-796bbdcf4f-cml5h\" (UID: \"426e4581-f3d0-49ad-acf5-8466b46a993c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.314962 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-node-bootstrap-token\") pod \"machine-config-server-zwhdl\" (UID: \"e659c1af-dde1-4c75-9e48-b1c9a2c7f598\") " pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.315048 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-metrics-tls\") pod \"dns-default-lzbwt\" (UID: \"3c34ba22-f973-4f34-b83f-3dcfdb3265d3\") " pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.315081 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmdgz\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-kube-api-access-cmdgz\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.315261 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.315323 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b83ba5-d2ac-44e2-9281-36db5ceb5d63-cert\") pod \"ingress-canary-cxc28\" (UID: \"29b83ba5-d2ac-44e2-9281-36db5ceb5d63\") " pod="openshift-ingress-canary/ingress-canary-cxc28" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.315420 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-certs\") pod \"machine-config-server-zwhdl\" (UID: \"e659c1af-dde1-4c75-9e48-b1c9a2c7f598\") " pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.315968 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.815956801 +0000 UTC m=+227.497660479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.318412 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-metrics-tls\") pod \"dns-default-lzbwt\" (UID: \"3c34ba22-f973-4f34-b83f-3dcfdb3265d3\") " pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.320456 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29b83ba5-d2ac-44e2-9281-36db5ceb5d63-cert\") pod \"ingress-canary-cxc28\" (UID: \"29b83ba5-d2ac-44e2-9281-36db5ceb5d63\") " pod="openshift-ingress-canary/ingress-canary-cxc28" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.324033 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-certs\") pod \"machine-config-server-zwhdl\" (UID: \"e659c1af-dde1-4c75-9e48-b1c9a2c7f598\") " pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.324832 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-node-bootstrap-token\") pod \"machine-config-server-zwhdl\" (UID: \"e659c1af-dde1-4c75-9e48-b1c9a2c7f598\") " pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:12 crc 
kubenswrapper[4851]: I0223 13:11:12.335139 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txckl\" (UniqueName: \"kubernetes.io/projected/b231b9e4-1f6a-42c3-a6e5-143aadf8b869-kube-api-access-txckl\") pod \"cluster-image-registry-operator-dc59b4c8b-5lrpf\" (UID: \"b231b9e4-1f6a-42c3-a6e5-143aadf8b869\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.351476 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt"] Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.354398 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.354891 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjxmf\" (UniqueName: \"kubernetes.io/projected/08e6c5b9-012d-4b1e-9704-b3cd1368a281-kube-api-access-vjxmf\") pod \"csi-hostpathplugin-xnbsq\" (UID: \"08e6c5b9-012d-4b1e-9704-b3cd1368a281\") " pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.364013 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-slxzr" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.374935 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.375747 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfwkh\" (UniqueName: \"kubernetes.io/projected/1439c07e-aa17-48f0-a415-35e5ffb0f512-kube-api-access-qfwkh\") pod \"authentication-operator-69f744f599-tbl98\" (UID: \"1439c07e-aa17-48f0-a415-35e5ffb0f512\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.394871 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6grxx\" (UniqueName: \"kubernetes.io/projected/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-kube-api-access-6grxx\") pod \"marketplace-operator-79b997595-wwn4t\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.409238 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.415555 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.415728 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r2ck\" (UniqueName: \"kubernetes.io/projected/29b83ba5-d2ac-44e2-9281-36db5ceb5d63-kube-api-access-2r2ck\") pod \"ingress-canary-cxc28\" (UID: \"29b83ba5-d2ac-44e2-9281-36db5ceb5d63\") " pod="openshift-ingress-canary/ingress-canary-cxc28" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.416314 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.416695 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.916667684 +0000 UTC m=+227.598371372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.416894 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.417591 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:12.917503107 +0000 UTC m=+227.599206785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.438004 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bghf2\" (UniqueName: \"kubernetes.io/projected/1683ba7c-cadd-40a9-b9fd-6495d888a3c8-kube-api-access-bghf2\") pod \"console-operator-58897d9998-mgv55\" (UID: \"1683ba7c-cadd-40a9-b9fd-6495d888a3c8\") " pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.441441 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fcq25"] Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.448107 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.453078 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.457804 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1da16125-ec58-487d-ae4e-16125c21bd0e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7g2rj\" (UID: \"1da16125-ec58-487d-ae4e-16125c21bd0e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.466663 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.475858 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmdz8\" (UniqueName: \"kubernetes.io/projected/ce808fab-8894-45af-86e6-5193f1de3201-kube-api-access-xmdz8\") pod \"router-default-5444994796-k2qrn\" (UID: \"ce808fab-8894-45af-86e6-5193f1de3201\") " pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.491379 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dvvw\" (UniqueName: \"kubernetes.io/projected/3c34ba22-f973-4f34-b83f-3dcfdb3265d3-kube-api-access-2dvvw\") pod \"dns-default-lzbwt\" (UID: \"3c34ba22-f973-4f34-b83f-3dcfdb3265d3\") " pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.492481 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.518108 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.519023 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:13.019006562 +0000 UTC m=+227.700710240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.519985 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-872p9\" (UniqueName: \"kubernetes.io/projected/db8c127f-0258-443d-a5b7-308b252f957e-kube-api-access-872p9\") pod \"machine-config-controller-84d6567774-cwfqj\" (UID: \"db8c127f-0258-443d-a5b7-308b252f957e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.520044 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5"] Feb 23 13:11:12 crc 
kubenswrapper[4851]: I0223 13:11:12.523542 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.529773 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.534141 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtm57\" (UniqueName: \"kubernetes.io/projected/3f659f30-3a3e-4031-a1bf-b26038294135-kube-api-access-qtm57\") pod \"oauth-openshift-558db77b4-gllrl\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.539106 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cxc28" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.563952 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffcjg\" (UniqueName: \"kubernetes.io/projected/4401f548-7ea9-4246-9ece-dc05c1738ffe-kube-api-access-ffcjg\") pod \"service-ca-9c57cc56f-z8hww\" (UID: \"4401f548-7ea9-4246-9ece-dc05c1738ffe\") " pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.582252 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwwsj\" (UniqueName: \"kubernetes.io/projected/3c81e69e-6d53-4016-b87e-bdc816dc0365-kube-api-access-lwwsj\") pod \"collect-profiles-29530860-zrt4b\" (UID: \"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.594649 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x8scz"] Feb 23 
13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.609698 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q74jh\" (UniqueName: \"kubernetes.io/projected/14bfc49a-35b3-4046-8ea4-ad199864f42a-kube-api-access-q74jh\") pod \"packageserver-d55dfcdfc-5f58w\" (UID: \"14bfc49a-35b3-4046-8ea4-ad199864f42a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.611924 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb8wv\" (UniqueName: \"kubernetes.io/projected/f440cbd8-c4b3-4191-8260-87162a1952fc-kube-api-access-sb8wv\") pod \"migrator-59844c95c7-nmrbr\" (UID: \"f440cbd8-c4b3-4191-8260-87162a1952fc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.619432 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.619799 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:13.119785538 +0000 UTC m=+227.801489216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.629743 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.647430 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/99ee0e3b-bdb8-4199-90e3-9c57e971f7b5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bvzfw\" (UID: \"99ee0e3b-bdb8-4199-90e3-9c57e971f7b5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.651658 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.660206 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.661899 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd2tt\" (UniqueName: \"kubernetes.io/projected/9d716e8a-6f21-4e2b-9c41-dfb813c86a6b-kube-api-access-zd2tt\") pod \"machine-config-operator-74547568cd-9qcnx\" (UID: \"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.673908 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.677362 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z9dx\" (UniqueName: \"kubernetes.io/projected/2644ea78-a197-4ce6-8c77-33c40c50e182-kube-api-access-4z9dx\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: \"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.684838 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.686833 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.696209 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rrmr\" (UniqueName: \"kubernetes.io/projected/53e2557a-2454-47ad-93e0-68e266e4b0cf-kube-api-access-8rrmr\") pod \"service-ca-operator-777779d784-4fg88\" (UID: \"53e2557a-2454-47ad-93e0-68e266e4b0cf\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.701098 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.712636 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e8f9a11-744a-4d48-8c6d-4ed59acc88a0-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-h8b2q\" (UID: \"7e8f9a11-744a-4d48-8c6d-4ed59acc88a0\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.721166 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.722978 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 13:11:13.221375346 +0000 UTC m=+227.903079024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.723121 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.723456 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:13.223443342 +0000 UTC m=+227.905147020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.726794 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.747010 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.749662 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dw6fk"] Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.764937 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.767213 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg6vj\" (UniqueName: \"kubernetes.io/projected/328f5d3c-6337-407d-a812-034d9d26069c-kube-api-access-lg6vj\") pod \"route-controller-manager-6576b87f9c-cnt7k\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.771892 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.779587 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.784503 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f76f\" (UniqueName: \"kubernetes.io/projected/62353140-dab7-459f-b0d4-c796087cb3f9-kube-api-access-5f76f\") pod \"control-plane-machine-set-operator-78cbb6b69f-klwfn\" (UID: \"62353140-dab7-459f-b0d4-c796087cb3f9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.784759 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd54j\" (UniqueName: \"kubernetes.io/projected/ce4841f9-f6b7-46d9-8a56-b2d532510d4b-kube-api-access-bd54j\") pod \"catalog-operator-68c6474976-scbph\" (UID: \"ce4841f9-f6b7-46d9-8a56-b2d532510d4b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.788491 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.807188 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvlqt\" (UniqueName: \"kubernetes.io/projected/8ed030ce-3f90-43e6-9fa5-7df6237a69c4-kube-api-access-jvlqt\") pod \"kube-storage-version-migrator-operator-b67b599dd-btfrr\" (UID: \"8ed030ce-3f90-43e6-9fa5-7df6237a69c4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.810350 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2644ea78-a197-4ce6-8c77-33c40c50e182-bound-sa-token\") pod \"ingress-operator-5b745b69d9-966l7\" (UID: \"2644ea78-a197-4ce6-8c77-33c40c50e182\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:12 crc kubenswrapper[4851]: W0223 13:11:12.818394 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce808fab_8894_45af_86e6_5193f1de3201.slice/crio-1928130ae03306b4957b6e21e55bce79f6b2f1779818aa637eead486d4a792bb WatchSource:0}: Error finding container 1928130ae03306b4957b6e21e55bce79f6b2f1779818aa637eead486d4a792bb: Status 404 returned error can't find the container with id 1928130ae03306b4957b6e21e55bce79f6b2f1779818aa637eead486d4a792bb Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.823848 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.824273 4851 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:13.324257769 +0000 UTC m=+228.005961447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.831512 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvbzr\" (UniqueName: \"kubernetes.io/projected/e9ab5dc2-f915-4c20-9c85-2380f944bd44-kube-api-access-hvbzr\") pod \"apiserver-76f77b778f-bgglr\" (UID: \"e9ab5dc2-f915-4c20-9c85-2380f944bd44\") " pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.853197 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-f2568"] Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.855504 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx55d\" (UniqueName: \"kubernetes.io/projected/58acf679-26d7-4261-8304-f74997e9594f-kube-api-access-rx55d\") pod \"package-server-manager-789f6589d5-zgbsx\" (UID: \"58acf679-26d7-4261-8304-f74997e9594f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.870848 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-slxzr"] Feb 23 13:11:12 crc kubenswrapper[4851]: 
I0223 13:11:12.880288 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lszw9\" (UniqueName: \"kubernetes.io/projected/6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931-kube-api-access-lszw9\") pod \"multus-admission-controller-857f4d67dd-2mgdx\" (UID: \"6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.896346 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.899791 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nh7w\" (UniqueName: \"kubernetes.io/projected/196d3fce-aab9-4dd5-82fd-7d442664af6e-kube-api-access-9nh7w\") pod \"dns-operator-744455d44c-zzqtr\" (UID: \"196d3fce-aab9-4dd5-82fd-7d442664af6e\") " pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.906549 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x8scz" event={"ID":"a6fe30bd-a140-4309-9156-52d361049059","Type":"ContainerStarted","Data":"f674dd0348bddec83931ad897ace598a56667bfa3598a7df66155b166879e1cc"} Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.907487 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.913660 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.913673 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" event={"ID":"4c4b7002-d97c-47bf-8de7-1361bcedc079","Type":"ContainerStarted","Data":"f153bb346ad2b8755ca03dd04f67d2779cf210b115433afe205bd5498666bc2c"} Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.913854 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" event={"ID":"4c4b7002-d97c-47bf-8de7-1361bcedc079","Type":"ContainerStarted","Data":"524122f8c57c937475283f2293f7b1ef9e9a35f778f9b2f7701536395063b262"} Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.913933 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.914098 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjnjg\" (UniqueName: \"kubernetes.io/projected/e659c1af-dde1-4c75-9e48-b1c9a2c7f598-kube-api-access-jjnjg\") pod \"machine-config-server-zwhdl\" (UID: \"e659c1af-dde1-4c75-9e48-b1c9a2c7f598\") " pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.920599 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.923582 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k2qrn" event={"ID":"ce808fab-8894-45af-86e6-5193f1de3201","Type":"ContainerStarted","Data":"1928130ae03306b4957b6e21e55bce79f6b2f1779818aa637eead486d4a792bb"} Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.924545 4851 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fcq25 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.924590 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" podUID="4c4b7002-d97c-47bf-8de7-1361bcedc079" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.925348 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:12 crc kubenswrapper[4851]: E0223 13:11:12.925775 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-23 13:11:13.425762664 +0000 UTC m=+228.107466342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.927399 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" event={"ID":"a095f17c-1ff0-450a-93b7-1518f99771d9","Type":"ContainerStarted","Data":"84260e5840720d44569d0fc31d2f59b267f2cdf95ba281e9e3be1187b3c02f36"} Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.927434 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" event={"ID":"a095f17c-1ff0-450a-93b7-1518f99771d9","Type":"ContainerStarted","Data":"c9a4a7d1de0cd4420f56cc24a0a3feb29e279a1c065ed5cd536bf6c4396d379d"} Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.929027 4851 csr.go:261] certificate signing request csr-s5gg9 is approved, waiting to be issued Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.936093 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" event={"ID":"49cf75f7-0c60-4281-b114-d43db3ea4e3c","Type":"ContainerStarted","Data":"7e200636f2db6c37e9bc755d330df7fc03d05b56e276410c9988d9eac42b15cc"} Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.936101 4851 csr.go:257] certificate signing request csr-s5gg9 is issued Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.936849 4851 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.942733 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" event={"ID":"06807502-b6d9-4803-93bf-d8ac8e721ef3","Type":"ContainerStarted","Data":"ef83d90f4d85ce082c4252b94124d29ec8395d85aa87527f2cc2d192734aec5c"} Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.944041 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.955728 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnmwn\" (UniqueName: \"kubernetes.io/projected/b5eed4cb-5efc-4449-9169-1375cb5e0dff-kube-api-access-tnmwn\") pod \"olm-operator-6b444d44fb-gd96x\" (UID: \"b5eed4cb-5efc-4449-9169-1375cb5e0dff\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.971557 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" Feb 23 13:11:12 crc kubenswrapper[4851]: I0223 13:11:12.998895 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.018646 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn" Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.019979 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mgv55"] Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.027092 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.027296 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:13.527269839 +0000 UTC m=+228.208973517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.027652 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.028052 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:13.5280336 +0000 UTC m=+228.209737278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.028394 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wwn4t"] Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.042991 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf"] Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.043641 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.057979 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.115058 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h"] Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.129414 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.129516 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:13.629494664 +0000 UTC m=+228.311198342 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.129689 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.129965 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:13.629956537 +0000 UTC m=+228.311660225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.151520 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zwhdl" Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.206074 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8r5vd"] Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.207083 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf"] Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.231274 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.231720 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:13.731704439 +0000 UTC m=+228.413408117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.333311 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.333835 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:13.833819231 +0000 UTC m=+228.515522919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: W0223 13:11:13.347177 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b17f04_ab64_4a33_9c9b_1273fc5ae0ba.slice/crio-9f3f7caf5d7f3875efac560a3c5852a35bc0da792d68a74a6fdd16eed0030289 WatchSource:0}: Error finding container 9f3f7caf5d7f3875efac560a3c5852a35bc0da792d68a74a6fdd16eed0030289: Status 404 returned error can't find the container with id 9f3f7caf5d7f3875efac560a3c5852a35bc0da792d68a74a6fdd16eed0030289 Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.381599 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tbl98"] Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.383948 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xnbsq"] Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.393193 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cxc28"] Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.436145 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.436487 
4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:13.936464467 +0000 UTC m=+228.618168145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.538231 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.538644 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.038632971 +0000 UTC m=+228.720336649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.639054 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.639497 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.139469818 +0000 UTC m=+228.821173496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.639868 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.640304 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.14028807 +0000 UTC m=+228.821991748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.740773 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.740921 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.24089096 +0000 UTC m=+228.922594638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.741051 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.741436 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.241425315 +0000 UTC m=+228.923128993 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.842610 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.843024 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.342997112 +0000 UTC m=+229.024700850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.874173 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gllrl"] Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.881558 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr"] Feb 23 13:11:13 crc kubenswrapper[4851]: W0223 13:11:13.898475 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf440cbd8_c4b3_4191_8260_87162a1952fc.slice/crio-9a667e259296b4b5f8a0a003a65c45791c6bb67debf96302f294a67e9af90a86 WatchSource:0}: Error finding container 9a667e259296b4b5f8a0a003a65c45791c6bb67debf96302f294a67e9af90a86: Status 404 returned error can't find the container with id 9a667e259296b4b5f8a0a003a65c45791c6bb67debf96302f294a67e9af90a86 Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.946269 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-23 13:06:12 +0000 UTC, rotation deadline is 2026-12-25 19:39:46.745865563 +0000 UTC Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.946573 4851 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7326h28m32.799297019s for next certificate rotation Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.950313 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:13 crc kubenswrapper[4851]: E0223 13:11:13.950970 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.450951604 +0000 UTC m=+229.132655282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.995067 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mgv55" event={"ID":"1683ba7c-cadd-40a9-b9fd-6495d888a3c8","Type":"ContainerStarted","Data":"964d8a5f80210705604f4ab771242cab1c12a604d5c968d4c0e64b274cd46bae"} Feb 23 13:11:13 crc kubenswrapper[4851]: I0223 13:11:13.995114 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mgv55" event={"ID":"1683ba7c-cadd-40a9-b9fd-6495d888a3c8","Type":"ContainerStarted","Data":"457447672a8bd9aece173a284f0387c9389de61a6b5d441fa72493d787dab29a"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.026500 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-slxzr" 
event={"ID":"58e43c54-4e65-4ca6-9a52-f79c58a072d4","Type":"ContainerStarted","Data":"ea5fb9909ba701ade515b1492e2b0756420b643bb23219bca4b4a3266b2b532f"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.027999 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-slxzr" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.033196 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.033252 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.034625 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.040426 4851 patch_prober.go:28] interesting pod/downloads-7954f5f757-slxzr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.040483 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-slxzr" podUID="58e43c54-4e65-4ca6-9a52-f79c58a072d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.051819 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.052508 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.55248212 +0000 UTC m=+229.234185828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.072464 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" event={"ID":"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4","Type":"ContainerStarted","Data":"e1d5b8fc9d296c19ee80f54824c3e88ec8bb7eed963e34745f387f68bead21c4"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.072506 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" event={"ID":"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4","Type":"ContainerStarted","Data":"8df3ec11ef14c94d6c9207b3c8e5c5b2fd00318f279566716642002450739175"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.073993 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.081637 4851 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wwn4t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure 
output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.081796 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" podUID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.086019 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2mgdx"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.114468 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.114630 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bgglr"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.146628 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k2qrn" event={"ID":"ce808fab-8894-45af-86e6-5193f1de3201","Type":"ContainerStarted","Data":"efa0f4747e12a7dfde4982f3a4c13b87140859a7e4965be86bab68d57e1d47e8"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.148962 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.154969 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.156695 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.656679958 +0000 UTC m=+229.338383636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.160228 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.177584 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" event={"ID":"a095f17c-1ff0-450a-93b7-1518f99771d9","Type":"ContainerStarted","Data":"2b2ad74956401a7e4c907fd35578d28080597e5b63c6f4e226efeb27c1e0c8e6"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.200762 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zwhdl" event={"ID":"e659c1af-dde1-4c75-9e48-b1c9a2c7f598","Type":"ContainerStarted","Data":"acd326df04d6d8211811521b8dbb0ac0d352093f328ef260a81d2a3f84092821"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.200795 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zwhdl" 
event={"ID":"e659c1af-dde1-4c75-9e48-b1c9a2c7f598","Type":"ContainerStarted","Data":"12c2671f15baccb9e5e3dd195f6f42b370f67ab97674768fce8ff32f2753fbfb"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.209401 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-966l7"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.210529 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.213274 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.213302 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" event={"ID":"1439c07e-aa17-48f0-a415-35e5ffb0f512","Type":"ContainerStarted","Data":"f28629eeedc79f344b6381ce2770f692d5530ad94a9fb9fcaf83759b924a58ac"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.220246 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z8hww"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.229061 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" event={"ID":"1b0d549d-439b-4011-8ea1-47e808a3b715","Type":"ContainerStarted","Data":"1cde192e1e1c5b8553ba35b339b234f934ad3fac9606de68a31ab3381de1ad9c"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.242424 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zzqtr"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.244004 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.252581 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-4fg88"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.252745 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" event={"ID":"08e6c5b9-012d-4b1e-9704-b3cd1368a281","Type":"ContainerStarted","Data":"111dda0736fa60d7b79a0a7579333f9d22fc63bc20ce3b1031bde14bfcf0fc19"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.255751 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.255854 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.755839359 +0000 UTC m=+229.437543037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.256094 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.257612 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.757598967 +0000 UTC m=+229.439302645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.272497 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x8scz" event={"ID":"a6fe30bd-a140-4309-9156-52d361049059","Type":"ContainerStarted","Data":"f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b"} Feb 23 13:11:14 crc kubenswrapper[4851]: W0223 13:11:14.275040 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ed030ce_3f90_43e6_9fa5_7df6237a69c4.slice/crio-c76bf3716061014d3745bf3bbee30dcbd79b00625542bfb2744b574ca69aa891 WatchSource:0}: Error finding container c76bf3716061014d3745bf3bbee30dcbd79b00625542bfb2744b574ca69aa891: Status 404 returned error can't find the container with id c76bf3716061014d3745bf3bbee30dcbd79b00625542bfb2744b574ca69aa891 Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.275768 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" event={"ID":"b231b9e4-1f6a-42c3-a6e5-143aadf8b869","Type":"ContainerStarted","Data":"e2797657e89d96ac926853d75c242ba07bf466cb3b8a41850cf94afa1298fe3e"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.275801 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" 
event={"ID":"b231b9e4-1f6a-42c3-a6e5-143aadf8b869","Type":"ContainerStarted","Data":"680006cdfe80e13f626374a9c2e999d999216eb327ca4a7ce5fbfd1f3d7da342"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.278686 4851 generic.go:334] "Generic (PLEG): container finished" podID="06807502-b6d9-4803-93bf-d8ac8e721ef3" containerID="f5863e08e5db89f1c513f85728f68b4087ff38745e11004993349930b3f4d320" exitCode=0 Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.279369 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" event={"ID":"06807502-b6d9-4803-93bf-d8ac8e721ef3","Type":"ContainerDied","Data":"f5863e08e5db89f1c513f85728f68b4087ff38745e11004993349930b3f4d320"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.283738 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" event={"ID":"d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b","Type":"ContainerStarted","Data":"bd67546a3f6380c74d68ce68b00cb94a60e551c7bd393c8109cd8583ab1db191"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.283787 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" event={"ID":"d0196ed8-f5ec-4fc9-907a-2e19c4a5e14b","Type":"ContainerStarted","Data":"b30ad3d98bcb33c9ccb96da31cf8c3a87cfea5b11fb0bc718e0da437542d5ee6"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.293676 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" event={"ID":"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba","Type":"ContainerStarted","Data":"9f3f7caf5d7f3875efac560a3c5852a35bc0da792d68a74a6fdd16eed0030289"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.302623 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" 
event={"ID":"3f659f30-3a3e-4031-a1bf-b26038294135","Type":"ContainerStarted","Data":"49b4825bee39bf0948b2a9dd96ac94970b222ba879130587d77a4f14286d3346"} Feb 23 13:11:14 crc kubenswrapper[4851]: W0223 13:11:14.304044 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb8c127f_0258_443d_a5b7_308b252f957e.slice/crio-68dec6d030ce3cb84d95a2d23e0921d55cc00f651ad9b8cd63291c00b985a4a5 WatchSource:0}: Error finding container 68dec6d030ce3cb84d95a2d23e0921d55cc00f651ad9b8cd63291c00b985a4a5: Status 404 returned error can't find the container with id 68dec6d030ce3cb84d95a2d23e0921d55cc00f651ad9b8cd63291c00b985a4a5 Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.311661 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cxc28" event={"ID":"29b83ba5-d2ac-44e2-9281-36db5ceb5d63","Type":"ContainerStarted","Data":"04646359df74a3146fca9776797bc0d669a2c3f09677add0df12e517e280ae3a"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.311715 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cxc28" event={"ID":"29b83ba5-d2ac-44e2-9281-36db5ceb5d63","Type":"ContainerStarted","Data":"a5c5f3cdf3df2f096e803cb9ee250c8ab74fde5f3dd0ffac469a68b3e94e020e"} Feb 23 13:11:14 crc kubenswrapper[4851]: W0223 13:11:14.316836 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196d3fce_aab9_4dd5_82fd_7d442664af6e.slice/crio-8dadc0c7f9721de397b16372479260db3d710d50ad97cf464b3ed32ec9cca612 WatchSource:0}: Error finding container 8dadc0c7f9721de397b16372479260db3d710d50ad97cf464b3ed32ec9cca612: Status 404 returned error can't find the container with id 8dadc0c7f9721de397b16372479260db3d710d50ad97cf464b3ed32ec9cca612 Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.318209 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" event={"ID":"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229","Type":"ContainerStarted","Data":"82314e94c94895c25424402b2d7aa4fa8d071cbb87498f526a5f84105b6af9b9"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.318252 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" event={"ID":"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229","Type":"ContainerStarted","Data":"a68a113011f94ffb9a270335525d933a09daa6265addd1c7b8d97785efe7defd"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.318684 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.322099 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" event={"ID":"426e4581-f3d0-49ad-acf5-8466b46a993c","Type":"ContainerStarted","Data":"d9bc171b2e4b60b23d8c6c534c21240d48d637e4b9072068fdc78717b348e144"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.322123 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" event={"ID":"426e4581-f3d0-49ad-acf5-8466b46a993c","Type":"ContainerStarted","Data":"2e2b1a3290c7e9f5517da6289c893887350bde18808302ac1974231e8f222689"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.334281 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr" event={"ID":"f440cbd8-c4b3-4191-8260-87162a1952fc","Type":"ContainerStarted","Data":"9a667e259296b4b5f8a0a003a65c45791c6bb67debf96302f294a67e9af90a86"} Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.352596 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.370123 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" podStartSLOduration=172.370098963 podStartE2EDuration="2m52.370098963s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:14.369721343 +0000 UTC m=+229.051425041" watchObservedRunningTime="2026-02-23 13:11:14.370098963 +0000 UTC m=+229.051802641" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.373772 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.376803 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.876776775 +0000 UTC m=+229.558480453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.387871 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.395219 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lzbwt"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.395277 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx"] Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.396799 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k"] Feb 23 13:11:14 crc kubenswrapper[4851]: W0223 13:11:14.466056 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58acf679_26d7_4261_8304_f74997e9594f.slice/crio-967546839d22caba0b847bfd9f057074afc945a1dd1b219e3d06207d1d7be53a WatchSource:0}: Error finding container 967546839d22caba0b847bfd9f057074afc945a1dd1b219e3d06207d1d7be53a: Status 404 returned error can't find the container with id 967546839d22caba0b847bfd9f057074afc945a1dd1b219e3d06207d1d7be53a Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.479307 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.479612 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:14.979600197 +0000 UTC m=+229.661303865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.531295 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tx9dh" podStartSLOduration=173.53127913 podStartE2EDuration="2m53.53127913s" podCreationTimestamp="2026-02-23 13:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:14.530699964 +0000 UTC m=+229.212403652" watchObservedRunningTime="2026-02-23 13:11:14.53127913 +0000 UTC m=+229.212982808" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.581102 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.581427 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:15.08139816 +0000 UTC m=+229.763101838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.581608 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.582046 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:15.082038968 +0000 UTC m=+229.763742646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.675895 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.682487 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:14 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:14 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:14 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.682548 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.682893 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.683349 4851 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:15.183270125 +0000 UTC m=+229.864973803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.692888 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x8scz" podStartSLOduration=172.692871968 podStartE2EDuration="2m52.692871968s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:14.690725339 +0000 UTC m=+229.372429017" watchObservedRunningTime="2026-02-23 13:11:14.692871968 +0000 UTC m=+229.374575646" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.729315 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zs7nt" podStartSLOduration=173.729297704 podStartE2EDuration="2m53.729297704s" podCreationTimestamp="2026-02-23 13:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:14.728635506 +0000 UTC m=+229.410339184" watchObservedRunningTime="2026-02-23 13:11:14.729297704 +0000 UTC m=+229.411001382" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.788491 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.789572 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:15.289554491 +0000 UTC m=+229.971258169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.825994 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" podStartSLOduration=172.825975557 podStartE2EDuration="2m52.825975557s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:14.775685232 +0000 UTC m=+229.457388910" watchObservedRunningTime="2026-02-23 13:11:14.825975557 +0000 UTC m=+229.507679235" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.844786 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" podStartSLOduration=173.844765561 podStartE2EDuration="2m53.844765561s" podCreationTimestamp="2026-02-23 13:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:14.823615803 +0000 UTC m=+229.505319501" watchObservedRunningTime="2026-02-23 13:11:14.844765561 +0000 UTC m=+229.526469239" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.890072 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.890421 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:15.390406249 +0000 UTC m=+230.072109927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.904082 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-k2qrn" podStartSLOduration=172.904065682 podStartE2EDuration="2m52.904065682s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:14.869681882 +0000 UTC m=+229.551385570" watchObservedRunningTime="2026-02-23 13:11:14.904065682 +0000 UTC m=+229.585769360" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.904732 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6lwgf" podStartSLOduration=172.904726531 podStartE2EDuration="2m52.904726531s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:14.904667409 +0000 UTC m=+229.586371107" watchObservedRunningTime="2026-02-23 13:11:14.904726531 +0000 UTC m=+229.586430209" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.927666 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" podStartSLOduration=172.927635887 podStartE2EDuration="2m52.927635887s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:14.925982222 +0000 UTC m=+229.607685910" watchObservedRunningTime="2026-02-23 13:11:14.927635887 +0000 UTC m=+229.609339565" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.973174 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-zwhdl" podStartSLOduration=4.973156152 podStartE2EDuration="4.973156152s" podCreationTimestamp="2026-02-23 13:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:14.971216339 +0000 UTC m=+229.652920017" watchObservedRunningTime="2026-02-23 13:11:14.973156152 +0000 UTC m=+229.654859830" Feb 23 13:11:14 crc kubenswrapper[4851]: I0223 13:11:14.991916 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:14 crc kubenswrapper[4851]: E0223 13:11:14.992406 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:15.492389897 +0000 UTC m=+230.174093575 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.055770 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-slxzr" podStartSLOduration=173.05575284 podStartE2EDuration="2m53.05575284s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.054011022 +0000 UTC m=+229.735714720" watchObservedRunningTime="2026-02-23 13:11:15.05575284 +0000 UTC m=+229.737456518" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.093788 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:15 crc kubenswrapper[4851]: E0223 13:11:15.094160 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:15.594142439 +0000 UTC m=+230.275846117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.147934 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" podStartSLOduration=173.14791951 podStartE2EDuration="2m53.14791951s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.147692094 +0000 UTC m=+229.829395792" watchObservedRunningTime="2026-02-23 13:11:15.14791951 +0000 UTC m=+229.829623188" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.180032 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5lrpf" podStartSLOduration=173.180016117 podStartE2EDuration="2m53.180016117s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.179974366 +0000 UTC m=+229.861678054" watchObservedRunningTime="2026-02-23 13:11:15.180016117 +0000 UTC m=+229.861719795" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.199555 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: 
\"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:15 crc kubenswrapper[4851]: E0223 13:11:15.200015 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:15.700002494 +0000 UTC m=+230.381706172 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.230786 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cml5h" podStartSLOduration=174.230768715 podStartE2EDuration="2m54.230768715s" podCreationTimestamp="2026-02-23 13:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.228926335 +0000 UTC m=+229.910630003" watchObservedRunningTime="2026-02-23 13:11:15.230768715 +0000 UTC m=+229.912472393" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.264983 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cxc28" podStartSLOduration=5.26496764 podStartE2EDuration="5.26496764s" podCreationTimestamp="2026-02-23 13:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.262352469 +0000 
UTC m=+229.944056147" watchObservedRunningTime="2026-02-23 13:11:15.26496764 +0000 UTC m=+229.946671318" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.301739 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:15 crc kubenswrapper[4851]: E0223 13:11:15.302492 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:15.802472156 +0000 UTC m=+230.484175834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.349383 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lzbwt" event={"ID":"3c34ba22-f973-4f34-b83f-3dcfdb3265d3","Type":"ContainerStarted","Data":"c179755e820f0ab51ea1db0679189ac7c766a89133a21982169045d347f6cdc0"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.355812 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" 
event={"ID":"1da16125-ec58-487d-ae4e-16125c21bd0e","Type":"ContainerStarted","Data":"d317971f63d9933698244a2e9699efcad129e259e4ecc853dc443a27159656e2"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.355886 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" event={"ID":"1da16125-ec58-487d-ae4e-16125c21bd0e","Type":"ContainerStarted","Data":"92f18fd5118e266c30c9e69a2398654c84b8e0473cdd56f27d2d23789cd32bd3"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.376470 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" event={"ID":"99ee0e3b-bdb8-4199-90e3-9c57e971f7b5","Type":"ContainerStarted","Data":"0e4d42e7a86ad8f507eba98337b58a33629eb9801a3ba545f11ca00f8ced6faf"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.376551 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" event={"ID":"99ee0e3b-bdb8-4199-90e3-9c57e971f7b5","Type":"ContainerStarted","Data":"64a4b17d5171d927f6eba1648eec3d7047dc7fbf353db1a603151638c6cf9dda"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.390754 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7g2rj" podStartSLOduration=173.383128041 podStartE2EDuration="2m53.383128041s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.377840266 +0000 UTC m=+230.059543944" watchObservedRunningTime="2026-02-23 13:11:15.383128041 +0000 UTC m=+230.064831719" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.404520 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.404627 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvzfw" podStartSLOduration=173.404616008 podStartE2EDuration="2m53.404616008s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.404153426 +0000 UTC m=+230.085857104" watchObservedRunningTime="2026-02-23 13:11:15.404616008 +0000 UTC m=+230.086319686" Feb 23 13:11:15 crc kubenswrapper[4851]: E0223 13:11:15.412894 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:15.912875364 +0000 UTC m=+230.594579052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.427106 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tbl98" event={"ID":"1439c07e-aa17-48f0-a415-35e5ffb0f512","Type":"ContainerStarted","Data":"7dd1ba494baa052e4009fdf16d3bfb79236da6d0f85487f000c208f8869b0aff"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.437060 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" event={"ID":"db8c127f-0258-443d-a5b7-308b252f957e","Type":"ContainerStarted","Data":"68dec6d030ce3cb84d95a2d23e0921d55cc00f651ad9b8cd63291c00b985a4a5"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.438069 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn" event={"ID":"62353140-dab7-459f-b0d4-c796087cb3f9","Type":"ContainerStarted","Data":"f1afdd747bc20dfb3e6974a7be7e79e67dbb747ab2e99053f13d11a95ba4a9ef"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.441867 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" event={"ID":"14bfc49a-35b3-4046-8ea4-ad199864f42a","Type":"ContainerStarted","Data":"49254aa2d8ea533da49142a6f74dfa19fe23c2b466616bed768b890e3aa28d87"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.441909 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" event={"ID":"14bfc49a-35b3-4046-8ea4-ad199864f42a","Type":"ContainerStarted","Data":"f1a59cf4e7f64a7b09800f956b1913e96c3e3d17d6b0af504c341df112ee76ad"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.445796 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.468751 4851 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5f58w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.468807 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" podUID="14bfc49a-35b3-4046-8ea4-ad199864f42a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.477232 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" event={"ID":"53e2557a-2454-47ad-93e0-68e266e4b0cf","Type":"ContainerStarted","Data":"bd0fe87285663a1da5f7b6bec6bbbea17c5a5f4efb7e743f82de4b00df455d9f"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.477273 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" event={"ID":"53e2557a-2454-47ad-93e0-68e266e4b0cf","Type":"ContainerStarted","Data":"590004f2ebf99d71701228e16d60fe7725e23d16b08a2200c5e47a7fd44e6415"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.499560 4851 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" podStartSLOduration=173.499536944 podStartE2EDuration="2m53.499536944s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.476602017 +0000 UTC m=+230.158305715" watchObservedRunningTime="2026-02-23 13:11:15.499536944 +0000 UTC m=+230.181240622" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.502259 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" event={"ID":"4401f548-7ea9-4246-9ece-dc05c1738ffe","Type":"ContainerStarted","Data":"d432022f5a58a9350e7b7f8a199b668e9a86f4817674386f4fbf25d7262a9983"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.502291 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" event={"ID":"4401f548-7ea9-4246-9ece-dc05c1738ffe","Type":"ContainerStarted","Data":"7b3abb4b5914b10af7ded8185e9e6dfe488e18b66711591498cf3488dcd1ceba"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.516837 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:15 crc kubenswrapper[4851]: E0223 13:11:15.516959 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:16.016932559 +0000 UTC m=+230.698636227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.517578 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:15 crc kubenswrapper[4851]: E0223 13:11:15.519755 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:16.019723106 +0000 UTC m=+230.701426784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.534635 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-z8hww" podStartSLOduration=173.534614733 podStartE2EDuration="2m53.534614733s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.530597133 +0000 UTC m=+230.212300811" watchObservedRunningTime="2026-02-23 13:11:15.534614733 +0000 UTC m=+230.216318411" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.535056 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-4fg88" podStartSLOduration=173.535050605 podStartE2EDuration="2m53.535050605s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.502096084 +0000 UTC m=+230.183799762" watchObservedRunningTime="2026-02-23 13:11:15.535050605 +0000 UTC m=+230.216754283" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.551830 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" event={"ID":"b5eed4cb-5efc-4449-9169-1375cb5e0dff","Type":"ContainerStarted","Data":"175ff947b6521fea6c509521ee4ac7fca70afc2b5dff76a9cc1f6f8c26a26846"} Feb 23 13:11:15 
crc kubenswrapper[4851]: I0223 13:11:15.551892 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" event={"ID":"b5eed4cb-5efc-4449-9169-1375cb5e0dff","Type":"ContainerStarted","Data":"52d852f08e4b5ce643456a4f00824bbaa05bdf23188391201cec65fec25471aa"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.552471 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.560045 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" event={"ID":"ce4841f9-f6b7-46d9-8a56-b2d532510d4b","Type":"ContainerStarted","Data":"581c3617242b5557155e4b38a7d79ce13d6033ebb1a737b7c9afb93064802031"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.583989 4851 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-gd96x container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.584046 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" podUID="b5eed4cb-5efc-4449-9169-1375cb5e0dff" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.597078 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8r5vd" event={"ID":"24b17f04-ab64-4a33-9c9b-1273fc5ae0ba","Type":"ContainerStarted","Data":"d7f81168fed7a8d18ce348fafa586180ddd009b961ae7ca04e01b98a6f679690"} Feb 23 13:11:15 crc 
kubenswrapper[4851]: I0223 13:11:15.618801 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:15 crc kubenswrapper[4851]: E0223 13:11:15.619029 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:16.1189958 +0000 UTC m=+230.800699478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.619407 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:15 crc kubenswrapper[4851]: E0223 13:11:15.619902 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-23 13:11:16.119889574 +0000 UTC m=+230.801593252 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.620465 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" event={"ID":"8ed030ce-3f90-43e6-9fa5-7df6237a69c4","Type":"ContainerStarted","Data":"6917f0285802ee04e6ffa3050494396339677ea671df5c0cecf5fa7c3d92b183"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.622414 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" event={"ID":"8ed030ce-3f90-43e6-9fa5-7df6237a69c4","Type":"ContainerStarted","Data":"c76bf3716061014d3745bf3bbee30dcbd79b00625542bfb2744b574ca69aa891"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.653834 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" event={"ID":"08e6c5b9-012d-4b1e-9704-b3cd1368a281","Type":"ContainerStarted","Data":"d74dfb190d5132538c816a941edc55fb899019576dbb619510f69b715ac9e83d"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.666241 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" podStartSLOduration=173.666219911 podStartE2EDuration="2m53.666219911s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.605323286 +0000 UTC m=+230.287026974" watchObservedRunningTime="2026-02-23 13:11:15.666219911 +0000 UTC m=+230.347923589" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.679031 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" event={"ID":"328f5d3c-6337-407d-a812-034d9d26069c","Type":"ContainerStarted","Data":"73621a130c67afb601f20461f1678004692ea945f7e8dfc41b6305a8395ef557"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.685548 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:15 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:15 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:15 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.685613 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.685926 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" event={"ID":"7e8f9a11-744a-4d48-8c6d-4ed59acc88a0","Type":"ContainerStarted","Data":"4e79352f10c24cdcc21a269a8937652f46351a21d178482f089ecb62458f5628"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.722740 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:15 crc kubenswrapper[4851]: E0223 13:11:15.723475 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:16.223422925 +0000 UTC m=+230.905126613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.726574 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" event={"ID":"6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931","Type":"ContainerStarted","Data":"40c7e51a9abe9fdc279332a05e180eb5722e2dacd122b84dc92942d516d0d5e8"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.756733 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bgglr" event={"ID":"e9ab5dc2-f915-4c20-9c85-2380f944bd44","Type":"ContainerStarted","Data":"7c57c87c881ec17debdff72fd3f79eb7c13a7f23d7a08eaff7b722cb7958a712"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.763162 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr" 
event={"ID":"f440cbd8-c4b3-4191-8260-87162a1952fc","Type":"ContainerStarted","Data":"c5f6d7a382d9ff08adf061920e3e3af1b1ad6e1ee45cba251a31bc45003bfd3f"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.763306 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr" event={"ID":"f440cbd8-c4b3-4191-8260-87162a1952fc","Type":"ContainerStarted","Data":"2beaba27e9b77fe35334014cc879273ee3cf404da73116e4e9c298dd1df73cee"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.788713 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-btfrr" podStartSLOduration=173.78869767 podStartE2EDuration="2m53.78869767s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.664140834 +0000 UTC m=+230.345844512" watchObservedRunningTime="2026-02-23 13:11:15.78869767 +0000 UTC m=+230.470401348" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.795478 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" event={"ID":"3c81e69e-6d53-4016-b87e-bdc816dc0365","Type":"ContainerStarted","Data":"342045dc568ecac4114c95ec0358ef99796f41d82855f90db7b3e80db51cc128"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.795625 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" event={"ID":"3c81e69e-6d53-4016-b87e-bdc816dc0365","Type":"ContainerStarted","Data":"90198e4021b53441d058efbe5fb8f26042d814621c5e22df926fecf0885c4248"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.801787 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-slxzr" 
event={"ID":"58e43c54-4e65-4ca6-9a52-f79c58a072d4","Type":"ContainerStarted","Data":"0be2c14774adbc6b1d33199664aa2fc62183de6b52f938603e13e6a6b8ebdea2"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.805270 4851 patch_prober.go:28] interesting pod/downloads-7954f5f757-slxzr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.805352 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-slxzr" podUID="58e43c54-4e65-4ca6-9a52-f79c58a072d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.826567 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:15 crc kubenswrapper[4851]: E0223 13:11:15.827918 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:16.327906252 +0000 UTC m=+231.009609930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.841211 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" event={"ID":"196d3fce-aab9-4dd5-82fd-7d442664af6e","Type":"ContainerStarted","Data":"8dadc0c7f9721de397b16372479260db3d710d50ad97cf464b3ed32ec9cca612"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.843078 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" podStartSLOduration=173.843067046 podStartE2EDuration="2m53.843067046s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.841692519 +0000 UTC m=+230.523396217" watchObservedRunningTime="2026-02-23 13:11:15.843067046 +0000 UTC m=+230.524770724" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.853847 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nmrbr" podStartSLOduration=173.85382382 podStartE2EDuration="2m53.85382382s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.78979572 +0000 UTC m=+230.471499398" watchObservedRunningTime="2026-02-23 13:11:15.85382382 +0000 UTC m=+230.535527498" Feb 23 13:11:15 
crc kubenswrapper[4851]: I0223 13:11:15.865110 4851 generic.go:334] "Generic (PLEG): container finished" podID="1b0d549d-439b-4011-8ea1-47e808a3b715" containerID="73cd31beed362893a949f18be419a2001e51f4e1538d54adcc0cf6ea12cc84c9" exitCode=0 Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.865253 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" event={"ID":"1b0d549d-439b-4011-8ea1-47e808a3b715","Type":"ContainerDied","Data":"73cd31beed362893a949f18be419a2001e51f4e1538d54adcc0cf6ea12cc84c9"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.865283 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" event={"ID":"1b0d549d-439b-4011-8ea1-47e808a3b715","Type":"ContainerStarted","Data":"675ee5cfce54bfdaee12e84dee19caed4727d9e97aa73d80b8ed5ba9d6fb9769"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.866679 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.888769 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" event={"ID":"58acf679-26d7-4261-8304-f74997e9594f","Type":"ContainerStarted","Data":"967546839d22caba0b847bfd9f057074afc945a1dd1b219e3d06207d1d7be53a"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.895547 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" podStartSLOduration=173.895528231 podStartE2EDuration="2m53.895528231s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:15.891013617 +0000 UTC 
m=+230.572717295" watchObservedRunningTime="2026-02-23 13:11:15.895528231 +0000 UTC m=+230.577231909" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.904268 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" event={"ID":"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b","Type":"ContainerStarted","Data":"8c6f7f376673d9f307ddcd01dd7fdb7e27f4678b1898e8cfd9217faa5bf12b39"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.932382 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:15 crc kubenswrapper[4851]: E0223 13:11:15.933757 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:16.433732375 +0000 UTC m=+231.115436053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.986008 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dw6fk" event={"ID":"8f8399a9-b50e-4ccb-8ab8-3e245ab4f229","Type":"ContainerStarted","Data":"d3d7764a849f1696a21e585caaa52a50dbf6ec8a3386d8cef954a01fbbd0fef6"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.987180 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" event={"ID":"2644ea78-a197-4ce6-8c77-33c40c50e182","Type":"ContainerStarted","Data":"33e510532dfc6f4479ba6f444cc751f63f12ab9b43e4f7fc805df43eede99023"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.987205 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" event={"ID":"2644ea78-a197-4ce6-8c77-33c40c50e182","Type":"ContainerStarted","Data":"903e06a9de7c82288e5c5dbca3c9379f5aa6fccc2991bad87559c083558382b1"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.989646 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" event={"ID":"3f659f30-3a3e-4031-a1bf-b26038294135","Type":"ContainerStarted","Data":"d997d42df974edc95876bd437e03126d94648f2d4ae2608048f6d7bd9ab7e8c8"} Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.989672 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 
13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.991111 4851 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wwn4t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.991151 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" podUID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.993140 4851 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gllrl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" start-of-body= Feb 23 13:11:15 crc kubenswrapper[4851]: I0223 13:11:15.993164 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" podUID="3f659f30-3a3e-4031-a1bf-b26038294135" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.18:6443/healthz\": dial tcp 10.217.0.18:6443: connect: connection refused" Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.037095 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.038457 
4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:16.538444948 +0000 UTC m=+231.220148626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.138554 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.144698 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:16.644677773 +0000 UTC m=+231.326381441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.245992 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.246300 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:16.746286291 +0000 UTC m=+231.427989969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.347253 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.347889 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:16.847873488 +0000 UTC m=+231.529577166 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.415155 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" podStartSLOduration=175.415138288 podStartE2EDuration="2m55.415138288s" podCreationTimestamp="2026-02-23 13:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:16.396066486 +0000 UTC m=+231.077770184" watchObservedRunningTime="2026-02-23 13:11:16.415138288 +0000 UTC m=+231.096841966" Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.449251 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.449628 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:16.94961212 +0000 UTC m=+231.631315808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.549941 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.550358 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:17.050342784 +0000 UTC m=+231.732046462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.651314 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.651676 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:17.151659754 +0000 UTC m=+231.833363432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.678881 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:16 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:16 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:16 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.679203 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.752251 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.752381 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 13:11:17.252362158 +0000 UTC m=+231.934065836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.752674 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.752992 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:17.252984705 +0000 UTC m=+231.934688383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.855814 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.855982 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:17.35595346 +0000 UTC m=+232.037657138 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.856678 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.857043 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:17.357030649 +0000 UTC m=+232.038734327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:16 crc kubenswrapper[4851]: I0223 13:11:16.957500 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:16 crc kubenswrapper[4851]: E0223 13:11:16.957934 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:17.457914938 +0000 UTC m=+232.139618616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.035530 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" event={"ID":"6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931","Type":"ContainerStarted","Data":"39963ec037930b62047ff85b9695251a9c25f5003565dbe8cc5e1b319052311e"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.035879 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" event={"ID":"6b663dd5-e1f2-4c9d-8f8a-96e3f3b80931","Type":"ContainerStarted","Data":"501c5e023a7fd3aa9fe488c524e30ce1688c20d403f67b16fe457c7b8604ba60"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.044411 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn" event={"ID":"62353140-dab7-459f-b0d4-c796087cb3f9","Type":"ContainerStarted","Data":"9593d0aab1f3716d0ecdb80ac7c078e7ff5bc3559065c62c5dcc74615df8eac6"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.057617 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" event={"ID":"328f5d3c-6337-407d-a812-034d9d26069c","Type":"ContainerStarted","Data":"8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.059177 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.059688 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.060800 4851 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-cnt7k container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.060852 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" podUID="328f5d3c-6337-407d-a812-034d9d26069c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 23 13:11:17 crc kubenswrapper[4851]: E0223 13:11:17.061927 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:17.561912601 +0000 UTC m=+232.243616279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.062921 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mgv55" podStartSLOduration=175.062908848 podStartE2EDuration="2m55.062908848s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:16.489904512 +0000 UTC m=+231.171608200" watchObservedRunningTime="2026-02-23 13:11:17.062908848 +0000 UTC m=+231.744612526" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.063706 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-2mgdx" podStartSLOduration=175.06370022 podStartE2EDuration="2m55.06370022s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.062440066 +0000 UTC m=+231.744143774" watchObservedRunningTime="2026-02-23 13:11:17.06370022 +0000 UTC m=+231.745403898" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.073259 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" event={"ID":"06807502-b6d9-4803-93bf-d8ac8e721ef3","Type":"ContainerStarted","Data":"969f912a07c6b2f1fd7d77f5f64500f211bad6616e7e3f1a92df9d9d86d1a3b4"} Feb 23 13:11:17 crc 
kubenswrapper[4851]: I0223 13:11:17.082923 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" event={"ID":"7e8f9a11-744a-4d48-8c6d-4ed59acc88a0","Type":"ContainerStarted","Data":"a46ddf0aa1fd8ea672d11804397bbd9caad675a79380e087b8bebaaafb9a5e3b"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.102157 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" event={"ID":"2644ea78-a197-4ce6-8c77-33c40c50e182","Type":"ContainerStarted","Data":"6e3dc22fc4f33bfc7ab7f5e633e40e6c4c5c210f3d457c9ed1f2e8d47e03155a"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.110933 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lzbwt" event={"ID":"3c34ba22-f973-4f34-b83f-3dcfdb3265d3","Type":"ContainerStarted","Data":"93467919276bac51a3009d85e79c4eb765590c2f6e46dda4ba021302de2b7d4b"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.110976 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lzbwt" event={"ID":"3c34ba22-f973-4f34-b83f-3dcfdb3265d3","Type":"ContainerStarted","Data":"e121850a4d173c3ecf2e50e9c7a9621e7dff2c192e486509367ef45c00097452"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.111516 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.126783 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" event={"ID":"196d3fce-aab9-4dd5-82fd-7d442664af6e","Type":"ContainerStarted","Data":"01f234aa76ed00cd703cd758e28a027cf74523f29b3a42b546244273d2e2b104"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.127037 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" 
event={"ID":"196d3fce-aab9-4dd5-82fd-7d442664af6e","Type":"ContainerStarted","Data":"5ad6d80a332987154ee3524f432387057c5ae73fdd599414b147e9b6df9979d9"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.153546 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-klwfn" podStartSLOduration=175.153531506 podStartE2EDuration="2m55.153531506s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.115685241 +0000 UTC m=+231.797388919" watchObservedRunningTime="2026-02-23 13:11:17.153531506 +0000 UTC m=+231.835235184" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.154982 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" podStartSLOduration=175.154974576 podStartE2EDuration="2m55.154974576s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.15477551 +0000 UTC m=+231.836479198" watchObservedRunningTime="2026-02-23 13:11:17.154974576 +0000 UTC m=+231.836678254" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.158515 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" event={"ID":"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b","Type":"ContainerStarted","Data":"fd9957072d58c1643ef55425a5a94271c4d9394aba6feacda784c9e4b27404fb"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.158557 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" 
event={"ID":"9d716e8a-6f21-4e2b-9c41-dfb813c86a6b","Type":"ContainerStarted","Data":"ff52244769a9fc8d443c40c726c4b8c589a9731fa13e0c6740f4caf1fd8e02f7"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.161668 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:17 crc kubenswrapper[4851]: E0223 13:11:17.162948 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:17.662933753 +0000 UTC m=+232.344637431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.187012 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" event={"ID":"db8c127f-0258-443d-a5b7-308b252f957e","Type":"ContainerStarted","Data":"6d339d8d52070967c6aea58e808614ea5a8f1bd2a3c602412519487c752fd801"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.187055 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" 
event={"ID":"db8c127f-0258-443d-a5b7-308b252f957e","Type":"ContainerStarted","Data":"7ce51857523bd9920eafefde495b4439390e3afc973c70115c280c425a8004a3"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.208967 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" event={"ID":"58acf679-26d7-4261-8304-f74997e9594f","Type":"ContainerStarted","Data":"24057a9dfa1ed12169b7d60dcc6c80dd03d04e83d79b817a9f7dc7d1fc3c3345"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.209166 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" event={"ID":"58acf679-26d7-4261-8304-f74997e9594f","Type":"ContainerStarted","Data":"8a7de51e1bb256b55dc3e0c495bc599dad3848e28db0797a2364b9e6dbc4b50a"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.210091 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.217813 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" event={"ID":"ce4841f9-f6b7-46d9-8a56-b2d532510d4b","Type":"ContainerStarted","Data":"ad9fda0ec8d87bf37654f4c72b4b693981b339880afc342e86fd8388e47d057d"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.218107 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-zzqtr" podStartSLOduration=175.218091061 podStartE2EDuration="2m55.218091061s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.216737794 +0000 UTC m=+231.898441462" watchObservedRunningTime="2026-02-23 13:11:17.218091061 +0000 UTC 
m=+231.899794759" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.218705 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.228011 4851 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-scbph container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.228063 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" podUID="ce4841f9-f6b7-46d9-8a56-b2d532510d4b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.233770 4851 generic.go:334] "Generic (PLEG): container finished" podID="e9ab5dc2-f915-4c20-9c85-2380f944bd44" containerID="d57110d28338033f8da05df46b309acfc3971d5a5be7d43e50a6639cdd23c9a6" exitCode=0 Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.235055 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bgglr" event={"ID":"e9ab5dc2-f915-4c20-9c85-2380f944bd44","Type":"ContainerDied","Data":"d57110d28338033f8da05df46b309acfc3971d5a5be7d43e50a6639cdd23c9a6"} Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.237136 4851 patch_prober.go:28] interesting pod/downloads-7954f5f757-slxzr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.237362 4851 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-slxzr" podUID="58e43c54-4e65-4ca6-9a52-f79c58a072d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.238986 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.264774 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:17 crc kubenswrapper[4851]: E0223 13:11:17.269056 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:17.769041764 +0000 UTC m=+232.450745442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.292155 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mgv55" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.292208 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.292465 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.325962 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gd96x" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.338384 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-h8b2q" podStartSLOduration=175.33836606 podStartE2EDuration="2m55.33836606s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.327059961 +0000 UTC m=+232.008763649" watchObservedRunningTime="2026-02-23 13:11:17.33836606 +0000 UTC m=+232.020069738" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.341029 4851 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-966l7" podStartSLOduration=175.341017962 podStartE2EDuration="2m55.341017962s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.283465309 +0000 UTC m=+231.965168987" watchObservedRunningTime="2026-02-23 13:11:17.341017962 +0000 UTC m=+232.022721640" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.370604 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:17 crc kubenswrapper[4851]: E0223 13:11:17.371166 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:17.871144446 +0000 UTC m=+232.552848124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.401447 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lzbwt" podStartSLOduration=8.401417514 podStartE2EDuration="8.401417514s" podCreationTimestamp="2026-02-23 13:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.362678935 +0000 UTC m=+232.044382633" watchObservedRunningTime="2026-02-23 13:11:17.401417514 +0000 UTC m=+232.083121192" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.453668 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" podStartSLOduration=175.453652112 podStartE2EDuration="2m55.453652112s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.403771028 +0000 UTC m=+232.085474706" watchObservedRunningTime="2026-02-23 13:11:17.453652112 +0000 UTC m=+232.135355780" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.473345 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:17 crc kubenswrapper[4851]: E0223 13:11:17.473679 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:17.973666839 +0000 UTC m=+232.655370517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.547491 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9qcnx" podStartSLOduration=175.547475537 podStartE2EDuration="2m55.547475537s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.49672706 +0000 UTC m=+232.178430738" watchObservedRunningTime="2026-02-23 13:11:17.547475537 +0000 UTC m=+232.229179215" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.548322 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" podStartSLOduration=175.54831434 podStartE2EDuration="2m55.54831434s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.545640367 +0000 UTC 
m=+232.227344055" watchObservedRunningTime="2026-02-23 13:11:17.54831434 +0000 UTC m=+232.230018018" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.553314 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.574440 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:17 crc kubenswrapper[4851]: E0223 13:11:17.574833 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:18.074817935 +0000 UTC m=+232.756521613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.645719 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cwfqj" podStartSLOduration=175.645704943 podStartE2EDuration="2m55.645704943s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.614665314 +0000 UTC m=+232.296368992" watchObservedRunningTime="2026-02-23 13:11:17.645704943 +0000 UTC m=+232.327408611" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.649873 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fcq25"] Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.650066 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" podUID="4c4b7002-d97c-47bf-8de7-1361bcedc079" containerName="controller-manager" containerID="cri-o://f153bb346ad2b8755ca03dd04f67d2779cf210b115433afe205bd5498666bc2c" gracePeriod=30 Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.675842 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: 
\"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:17 crc kubenswrapper[4851]: E0223 13:11:17.676201 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:18.176187346 +0000 UTC m=+232.857891024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.683243 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:17 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:17 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:17 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.683306 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.729924 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" 
podStartSLOduration=175.729906994 podStartE2EDuration="2m55.729906994s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:17.723681204 +0000 UTC m=+232.405384892" watchObservedRunningTime="2026-02-23 13:11:17.729906994 +0000 UTC m=+232.411610662" Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.776821 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:17 crc kubenswrapper[4851]: E0223 13:11:17.777191 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:18.277176367 +0000 UTC m=+232.958880035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.877639 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:17 crc kubenswrapper[4851]: E0223 13:11:17.877970 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:18.377954102 +0000 UTC m=+233.059657780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.891617 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k"] Feb 23 13:11:17 crc kubenswrapper[4851]: I0223 13:11:17.980779 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:17 crc kubenswrapper[4851]: E0223 13:11:17.981196 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:18.481182154 +0000 UTC m=+233.162885832 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.083288 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:18 crc kubenswrapper[4851]: E0223 13:11:18.084314 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:18.584302304 +0000 UTC m=+233.266005972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.184152 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:18 crc kubenswrapper[4851]: E0223 13:11:18.184479 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:18.684457122 +0000 UTC m=+233.366160800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.236871 4851 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5f58w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.236924 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" podUID="14bfc49a-35b3-4046-8ea4-ad199864f42a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.237158 4851 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-f2568 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.237281 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" podUID="1b0d549d-439b-4011-8ea1-47e808a3b715" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.262175 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c4b7002-d97c-47bf-8de7-1361bcedc079" containerID="f153bb346ad2b8755ca03dd04f67d2779cf210b115433afe205bd5498666bc2c" exitCode=0 Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.262285 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" event={"ID":"4c4b7002-d97c-47bf-8de7-1361bcedc079","Type":"ContainerDied","Data":"f153bb346ad2b8755ca03dd04f67d2779cf210b115433afe205bd5498666bc2c"} Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.285061 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:18 crc kubenswrapper[4851]: E0223 13:11:18.285391 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:18.785378501 +0000 UTC m=+233.467082179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.293774 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.305309 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bgglr" event={"ID":"e9ab5dc2-f915-4c20-9c85-2380f944bd44","Type":"ContainerStarted","Data":"dbbf13bfdb3d4af58eb453705b392e33e42833f80da5cdb9ce558ed2fca13235"} Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.318748 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" event={"ID":"08e6c5b9-012d-4b1e-9704-b3cd1368a281","Type":"ContainerStarted","Data":"f7a8f74b6113f8defb0fd4987da9db9186c2b377f5370ee9b11b559d79332d1c"} Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.335052 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-scbph" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.335772 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.353122 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gjsl5" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.381233 4851 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5f58w" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.386429 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:18 crc kubenswrapper[4851]: E0223 13:11:18.386769 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:18.886745393 +0000 UTC m=+233.568449071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.390991 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:18 crc kubenswrapper[4851]: E0223 13:11:18.392509 4851 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:18.89248405 +0000 UTC m=+233.574187728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.399615 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-f2568" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.511153 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:18 crc kubenswrapper[4851]: E0223 13:11:18.513028 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:19.013005655 +0000 UTC m=+233.694709323 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.616953 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:18 crc kubenswrapper[4851]: E0223 13:11:18.617894 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:19.117882303 +0000 UTC m=+233.799585981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.629686 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.678130 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:18 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:18 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:18 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.678516 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.718112 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-proxy-ca-bundles\") pod \"4c4b7002-d97c-47bf-8de7-1361bcedc079\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.718398 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-client-ca\") pod \"4c4b7002-d97c-47bf-8de7-1361bcedc079\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.718512 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-config\") pod \"4c4b7002-d97c-47bf-8de7-1361bcedc079\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " Feb 23 13:11:18 crc 
kubenswrapper[4851]: I0223 13:11:18.718658 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4b7002-d97c-47bf-8de7-1361bcedc079-serving-cert\") pod \"4c4b7002-d97c-47bf-8de7-1361bcedc079\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.718862 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.718980 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsf6w\" (UniqueName: \"kubernetes.io/projected/4c4b7002-d97c-47bf-8de7-1361bcedc079-kube-api-access-lsf6w\") pod \"4c4b7002-d97c-47bf-8de7-1361bcedc079\" (UID: \"4c4b7002-d97c-47bf-8de7-1361bcedc079\") " Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.719276 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4c4b7002-d97c-47bf-8de7-1361bcedc079" (UID: "4c4b7002-d97c-47bf-8de7-1361bcedc079"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.719694 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-config" (OuterVolumeSpecName: "config") pod "4c4b7002-d97c-47bf-8de7-1361bcedc079" (UID: "4c4b7002-d97c-47bf-8de7-1361bcedc079"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.719935 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-client-ca" (OuterVolumeSpecName: "client-ca") pod "4c4b7002-d97c-47bf-8de7-1361bcedc079" (UID: "4c4b7002-d97c-47bf-8de7-1361bcedc079"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:11:18 crc kubenswrapper[4851]: E0223 13:11:18.720355 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:19.220321104 +0000 UTC m=+233.902024782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.726841 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c4b7002-d97c-47bf-8de7-1361bcedc079-kube-api-access-lsf6w" (OuterVolumeSpecName: "kube-api-access-lsf6w") pod "4c4b7002-d97c-47bf-8de7-1361bcedc079" (UID: "4c4b7002-d97c-47bf-8de7-1361bcedc079"). InnerVolumeSpecName "kube-api-access-lsf6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.727498 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c4b7002-d97c-47bf-8de7-1361bcedc079-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4c4b7002-d97c-47bf-8de7-1361bcedc079" (UID: "4c4b7002-d97c-47bf-8de7-1361bcedc079"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.749970 4851 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.820486 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.820587 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsf6w\" (UniqueName: \"kubernetes.io/projected/4c4b7002-d97c-47bf-8de7-1361bcedc079-kube-api-access-lsf6w\") on node \"crc\" DevicePath \"\"" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.820636 4851 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.820648 4851 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 
13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.820657 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4b7002-d97c-47bf-8de7-1361bcedc079-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.820667 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4b7002-d97c-47bf-8de7-1361bcedc079-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:11:18 crc kubenswrapper[4851]: E0223 13:11:18.820854 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:19.320836852 +0000 UTC m=+234.002540530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:18 crc kubenswrapper[4851]: I0223 13:11:18.921844 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:18 crc kubenswrapper[4851]: E0223 13:11:18.922049 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-23 13:11:19.422022908 +0000 UTC m=+234.103726586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.023815 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:19 crc kubenswrapper[4851]: E0223 13:11:19.024215 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:19.524195612 +0000 UTC m=+234.205899360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.125024 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:19 crc kubenswrapper[4851]: E0223 13:11:19.125237 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 13:11:19.625210894 +0000 UTC m=+234.306914572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.125370 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:19 crc kubenswrapper[4851]: E0223 13:11:19.125683 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 13:11:19.625667626 +0000 UTC m=+234.307371304 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-z9hs4" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.176139 4851 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-23T13:11:18.749999755Z","Handler":null,"Name":""} Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.181099 4851 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.181144 4851 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.226736 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.230170 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.324865 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bgglr" event={"ID":"e9ab5dc2-f915-4c20-9c85-2380f944bd44","Type":"ContainerStarted","Data":"8f9bc2c859f826f824b8bb0670c128bb953c787bc8a68bfa76ec118387b3339f"} Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.327076 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" event={"ID":"08e6c5b9-012d-4b1e-9704-b3cd1368a281","Type":"ContainerStarted","Data":"3012d3e5c52de902a1573f3d6e6d2c3e0cabf1bf2d3ae0df3e2832cb3a705088"} Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.327121 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" event={"ID":"08e6c5b9-012d-4b1e-9704-b3cd1368a281","Type":"ContainerStarted","Data":"f7e3ec402d60e7988ae40e7812d385c1336718bd56515b011d887032ba125956"} Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.327792 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.328425 4851 generic.go:334] "Generic (PLEG): container finished" podID="3c81e69e-6d53-4016-b87e-bdc816dc0365" containerID="342045dc568ecac4114c95ec0358ef99796f41d82855f90db7b3e80db51cc128" exitCode=0 Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.328484 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" event={"ID":"3c81e69e-6d53-4016-b87e-bdc816dc0365","Type":"ContainerDied","Data":"342045dc568ecac4114c95ec0358ef99796f41d82855f90db7b3e80db51cc128"} Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.330142 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.330174 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fcq25" event={"ID":"4c4b7002-d97c-47bf-8de7-1361bcedc079","Type":"ContainerDied","Data":"524122f8c57c937475283f2293f7b1ef9e9a35f778f9b2f7701536395063b262"} Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.330230 4851 scope.go:117] "RemoveContainer" containerID="f153bb346ad2b8755ca03dd04f67d2779cf210b115433afe205bd5498666bc2c" Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.330757 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" podUID="328f5d3c-6337-407d-a812-034d9d26069c" containerName="route-controller-manager" containerID="cri-o://8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2" gracePeriod=30 Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.335650 4851 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.335703 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.355078 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bgglr" podStartSLOduration=178.355064398 podStartE2EDuration="2m58.355064398s" podCreationTimestamp="2026-02-23 13:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:19.354671248 +0000 UTC m=+234.036374926" watchObservedRunningTime="2026-02-23 13:11:19.355064398 +0000 UTC m=+234.036768076"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.381119 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-z9hs4\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.384345 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xnbsq" podStartSLOduration=10.384314328 podStartE2EDuration="10.384314328s" podCreationTimestamp="2026-02-23 13:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:19.382094807 +0000 UTC m=+234.063798505" watchObservedRunningTime="2026-02-23 13:11:19.384314328 +0000 UTC m=+234.066018006"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.393852 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fcq25"]
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.396077 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fcq25"]
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.444481 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.567387 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dpz2p"]
Feb 23 13:11:19 crc kubenswrapper[4851]: E0223 13:11:19.567965 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c4b7002-d97c-47bf-8de7-1361bcedc079" containerName="controller-manager"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.567985 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c4b7002-d97c-47bf-8de7-1361bcedc079" containerName="controller-manager"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.568291 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c4b7002-d97c-47bf-8de7-1361bcedc079" containerName="controller-manager"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.571637 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpz2p"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.575849 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"]
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.576520 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.579759 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.587204 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.588482 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.588490 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.588566 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.588706 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.590595 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpz2p"]
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.593048 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.594904 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"]
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.598843 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.632320 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-proxy-ca-bundles\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.632401 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-catalog-content\") pod \"certified-operators-dpz2p\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " pod="openshift-marketplace/certified-operators-dpz2p"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.632424 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-utilities\") pod \"certified-operators-dpz2p\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " pod="openshift-marketplace/certified-operators-dpz2p"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.632444 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbg9v\" (UniqueName: \"kubernetes.io/projected/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-kube-api-access-tbg9v\") pod \"certified-operators-dpz2p\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " pod="openshift-marketplace/certified-operators-dpz2p"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.632467 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6kkr\" (UniqueName: \"kubernetes.io/projected/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-kube-api-access-w6kkr\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.632499 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-config\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.632536 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-client-ca\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.632575 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-serving-cert\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.679404 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 23 13:11:19 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld
Feb 23 13:11:19 crc kubenswrapper[4851]: [+]process-running ok
Feb 23 13:11:19 crc kubenswrapper[4851]: healthz check failed
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.679744 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.689757 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.717511 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z9hs4"]
Feb 23 13:11:19 crc kubenswrapper[4851]: W0223 13:11:19.724841 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod598d1af4_7f4c_4815_8b0c_bd364fcc191d.slice/crio-122abd91a31ccf77db4a48cf878a4573ec5c006ee2ec42edaef03fc543ea5597 WatchSource:0}: Error finding container 122abd91a31ccf77db4a48cf878a4573ec5c006ee2ec42edaef03fc543ea5597: Status 404 returned error can't find the container with id 122abd91a31ccf77db4a48cf878a4573ec5c006ee2ec42edaef03fc543ea5597
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733177 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/328f5d3c-6337-407d-a812-034d9d26069c-serving-cert\") pod \"328f5d3c-6337-407d-a812-034d9d26069c\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") "
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733323 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-client-ca\") pod \"328f5d3c-6337-407d-a812-034d9d26069c\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") "
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733456 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg6vj\" (UniqueName: \"kubernetes.io/projected/328f5d3c-6337-407d-a812-034d9d26069c-kube-api-access-lg6vj\") pod \"328f5d3c-6337-407d-a812-034d9d26069c\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") "
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733486 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-config\") pod \"328f5d3c-6337-407d-a812-034d9d26069c\" (UID: \"328f5d3c-6337-407d-a812-034d9d26069c\") "
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733567 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-proxy-ca-bundles\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733626 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-catalog-content\") pod \"certified-operators-dpz2p\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " pod="openshift-marketplace/certified-operators-dpz2p"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733653 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-utilities\") pod \"certified-operators-dpz2p\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " pod="openshift-marketplace/certified-operators-dpz2p"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733676 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbg9v\" (UniqueName: \"kubernetes.io/projected/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-kube-api-access-tbg9v\") pod \"certified-operators-dpz2p\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " pod="openshift-marketplace/certified-operators-dpz2p"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733695 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6kkr\" (UniqueName: \"kubernetes.io/projected/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-kube-api-access-w6kkr\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733738 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-config\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733779 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-client-ca\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.733821 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-serving-cert\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.749018 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-client-ca" (OuterVolumeSpecName: "client-ca") pod "328f5d3c-6337-407d-a812-034d9d26069c" (UID: "328f5d3c-6337-407d-a812-034d9d26069c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.749422 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-config" (OuterVolumeSpecName: "config") pod "328f5d3c-6337-407d-a812-034d9d26069c" (UID: "328f5d3c-6337-407d-a812-034d9d26069c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.750284 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-proxy-ca-bundles\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.750622 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-catalog-content\") pod \"certified-operators-dpz2p\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " pod="openshift-marketplace/certified-operators-dpz2p"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.750852 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-utilities\") pod \"certified-operators-dpz2p\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " pod="openshift-marketplace/certified-operators-dpz2p"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.753078 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-config\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.754220 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-client-ca\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.761402 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328f5d3c-6337-407d-a812-034d9d26069c-kube-api-access-lg6vj" (OuterVolumeSpecName: "kube-api-access-lg6vj") pod "328f5d3c-6337-407d-a812-034d9d26069c" (UID: "328f5d3c-6337-407d-a812-034d9d26069c"). InnerVolumeSpecName "kube-api-access-lg6vj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.763833 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-serving-cert\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.765914 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/328f5d3c-6337-407d-a812-034d9d26069c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "328f5d3c-6337-407d-a812-034d9d26069c" (UID: "328f5d3c-6337-407d-a812-034d9d26069c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.768608 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6kkr\" (UniqueName: \"kubernetes.io/projected/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-kube-api-access-w6kkr\") pod \"controller-manager-7fd9c5955b-mkjkg\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.769068 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vvh57"]
Feb 23 13:11:19 crc kubenswrapper[4851]: E0223 13:11:19.769503 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328f5d3c-6337-407d-a812-034d9d26069c" containerName="route-controller-manager"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.769520 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="328f5d3c-6337-407d-a812-034d9d26069c" containerName="route-controller-manager"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.769712 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="328f5d3c-6337-407d-a812-034d9d26069c" containerName="route-controller-manager"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.773054 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvh57"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.774713 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvh57"]
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.781884 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.785157 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbg9v\" (UniqueName: \"kubernetes.io/projected/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-kube-api-access-tbg9v\") pod \"certified-operators-dpz2p\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " pod="openshift-marketplace/certified-operators-dpz2p"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.847700 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8r7w\" (UniqueName: \"kubernetes.io/projected/4e65eda3-0eae-4672-9f18-c87148fcc449-kube-api-access-v8r7w\") pod \"community-operators-vvh57\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " pod="openshift-marketplace/community-operators-vvh57"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.847825 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-utilities\") pod \"community-operators-vvh57\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " pod="openshift-marketplace/community-operators-vvh57"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.847873 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-catalog-content\") pod \"community-operators-vvh57\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " pod="openshift-marketplace/community-operators-vvh57"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.847922 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/328f5d3c-6337-407d-a812-034d9d26069c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.847954 4851 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-client-ca\") on node \"crc\" DevicePath \"\""
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.847965 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg6vj\" (UniqueName: \"kubernetes.io/projected/328f5d3c-6337-407d-a812-034d9d26069c-kube-api-access-lg6vj\") on node \"crc\" DevicePath \"\""
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.847978 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/328f5d3c-6337-407d-a812-034d9d26069c-config\") on node \"crc\" DevicePath \"\""
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.921508 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpz2p"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.939207 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.947907 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lsgj2"]
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.949031 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsgj2"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.949388 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-utilities\") pod \"community-operators-vvh57\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " pod="openshift-marketplace/community-operators-vvh57"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.949501 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-catalog-content\") pod \"community-operators-vvh57\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " pod="openshift-marketplace/community-operators-vvh57"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.949593 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8r7w\" (UniqueName: \"kubernetes.io/projected/4e65eda3-0eae-4672-9f18-c87148fcc449-kube-api-access-v8r7w\") pod \"community-operators-vvh57\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " pod="openshift-marketplace/community-operators-vvh57"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.950847 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-utilities\") pod \"community-operators-vvh57\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " pod="openshift-marketplace/community-operators-vvh57"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.955574 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-catalog-content\") pod \"community-operators-vvh57\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " pod="openshift-marketplace/community-operators-vvh57"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.983688 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8r7w\" (UniqueName: \"kubernetes.io/projected/4e65eda3-0eae-4672-9f18-c87148fcc449-kube-api-access-v8r7w\") pod \"community-operators-vvh57\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " pod="openshift-marketplace/community-operators-vvh57"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.984292 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c4b7002-d97c-47bf-8de7-1361bcedc079" path="/var/lib/kubelet/pods/4c4b7002-d97c-47bf-8de7-1361bcedc079/volumes"
Feb 23 13:11:19 crc kubenswrapper[4851]: I0223 13:11:19.985211 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.003547 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lsgj2"]
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.052882 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhm4z\" (UniqueName: \"kubernetes.io/projected/952fbf0b-c4b2-47ab-8897-0bae64960c3d-kube-api-access-xhm4z\") pod \"certified-operators-lsgj2\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " pod="openshift-marketplace/certified-operators-lsgj2"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.052932 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-utilities\") pod \"certified-operators-lsgj2\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " pod="openshift-marketplace/certified-operators-lsgj2"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.052946 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-catalog-content\") pod \"certified-operators-lsgj2\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " pod="openshift-marketplace/certified-operators-lsgj2"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.111609 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvh57"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.150908 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4z5sm"]
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.151822 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z5sm"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.158485 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhm4z\" (UniqueName: \"kubernetes.io/projected/952fbf0b-c4b2-47ab-8897-0bae64960c3d-kube-api-access-xhm4z\") pod \"certified-operators-lsgj2\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " pod="openshift-marketplace/certified-operators-lsgj2"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.158516 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-utilities\") pod \"certified-operators-lsgj2\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " pod="openshift-marketplace/certified-operators-lsgj2"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.158533 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-catalog-content\") pod \"certified-operators-lsgj2\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " pod="openshift-marketplace/certified-operators-lsgj2"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.159124 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-catalog-content\") pod \"certified-operators-lsgj2\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " pod="openshift-marketplace/certified-operators-lsgj2"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.159531 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-utilities\") pod \"certified-operators-lsgj2\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " pod="openshift-marketplace/certified-operators-lsgj2"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.168130 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z5sm"]
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.175617 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c"]
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.176191 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.195971 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhm4z\" (UniqueName: \"kubernetes.io/projected/952fbf0b-c4b2-47ab-8897-0bae64960c3d-kube-api-access-xhm4z\") pod \"certified-operators-lsgj2\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " pod="openshift-marketplace/certified-operators-lsgj2"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.193323 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c"]
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.260010 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-catalog-content\") pod \"community-operators-4z5sm\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " pod="openshift-marketplace/community-operators-4z5sm"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.260086 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-utilities\") pod \"community-operators-4z5sm\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " pod="openshift-marketplace/community-operators-4z5sm"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.260123 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-serving-cert\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.260150 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztj2r\" (UniqueName: \"kubernetes.io/projected/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-kube-api-access-ztj2r\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.260173 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-client-ca\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.260195 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-config\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.260256 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbftk\" (UniqueName: \"kubernetes.io/projected/1a0cc846-351c-4e97-a412-82e4a82863bd-kube-api-access-jbftk\") pod \"community-operators-4z5sm\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " pod="openshift-marketplace/community-operators-4z5sm"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.267177 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsgj2"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.299499 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dpz2p"]
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.342382 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpz2p" event={"ID":"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7","Type":"ContainerStarted","Data":"393e919b6d1e58c232cbf9098dad2ae7ab4097ef769f619a04fa5295f02b1799"}
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.343700 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" event={"ID":"598d1af4-7f4c-4815-8b0c-bd364fcc191d","Type":"ContainerStarted","Data":"66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce"}
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.343721 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" event={"ID":"598d1af4-7f4c-4815-8b0c-bd364fcc191d","Type":"ContainerStarted","Data":"122abd91a31ccf77db4a48cf878a4573ec5c006ee2ec42edaef03fc543ea5597"}
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.344463 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4"
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.346203 4851 generic.go:334] "Generic (PLEG): container finished" podID="328f5d3c-6337-407d-a812-034d9d26069c" containerID="8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2" exitCode=0
Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.347188 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.347213 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" event={"ID":"328f5d3c-6337-407d-a812-034d9d26069c","Type":"ContainerDied","Data":"8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2"} Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.347235 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k" event={"ID":"328f5d3c-6337-407d-a812-034d9d26069c","Type":"ContainerDied","Data":"73621a130c67afb601f20461f1678004692ea945f7e8dfc41b6305a8395ef557"} Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.347252 4851 scope.go:117] "RemoveContainer" containerID="8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.361213 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztj2r\" (UniqueName: \"kubernetes.io/projected/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-kube-api-access-ztj2r\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.361267 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-client-ca\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.361292 4851 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-config\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.361310 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbftk\" (UniqueName: \"kubernetes.io/projected/1a0cc846-351c-4e97-a412-82e4a82863bd-kube-api-access-jbftk\") pod \"community-operators-4z5sm\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.361400 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-catalog-content\") pod \"community-operators-4z5sm\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.361448 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-utilities\") pod \"community-operators-4z5sm\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.361517 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-serving-cert\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.365917 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-catalog-content\") pod \"community-operators-4z5sm\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.367534 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-client-ca\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.368067 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-serving-cert\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.368480 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-utilities\") pod \"community-operators-4z5sm\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.379993 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-config\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:11:20 crc 
kubenswrapper[4851]: I0223 13:11:20.391597 4851 scope.go:117] "RemoveContainer" containerID="8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2" Feb 23 13:11:20 crc kubenswrapper[4851]: E0223 13:11:20.392339 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2\": container with ID starting with 8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2 not found: ID does not exist" containerID="8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.392389 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2"} err="failed to get container status \"8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2\": rpc error: code = NotFound desc = could not find container \"8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2\": container with ID starting with 8796f0dbb0f97dc8b17cd495ced9bd94e80df0541437b2727b84e704e422bad2 not found: ID does not exist" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.399012 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" podStartSLOduration=178.398992771 podStartE2EDuration="2m58.398992771s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:20.381113852 +0000 UTC m=+235.062817560" watchObservedRunningTime="2026-02-23 13:11:20.398992771 +0000 UTC m=+235.080696449" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.401560 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbftk\" (UniqueName: 
\"kubernetes.io/projected/1a0cc846-351c-4e97-a412-82e4a82863bd-kube-api-access-jbftk\") pod \"community-operators-4z5sm\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.402088 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k"] Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.405726 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztj2r\" (UniqueName: \"kubernetes.io/projected/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-kube-api-access-ztj2r\") pod \"route-controller-manager-6d8967cb8b-4fw2c\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.416582 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"] Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.417047 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-cnt7k"] Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.455390 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvh57"] Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.491450 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.503635 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:11:20 crc kubenswrapper[4851]: W0223 13:11:20.517238 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e65eda3_0eae_4672_9f18_c87148fcc449.slice/crio-7e5cf67eb04d4d25d11ceade7557e97821ecc3982ba7391f3df97aa07b6ddf93 WatchSource:0}: Error finding container 7e5cf67eb04d4d25d11ceade7557e97821ecc3982ba7391f3df97aa07b6ddf93: Status 404 returned error can't find the container with id 7e5cf67eb04d4d25d11ceade7557e97821ecc3982ba7391f3df97aa07b6ddf93 Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.604192 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lsgj2"] Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.678396 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:20 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:20 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:20 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.678449 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.773076 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.788321 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z5sm"] Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.859307 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c"] Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.875614 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c81e69e-6d53-4016-b87e-bdc816dc0365-secret-volume\") pod \"3c81e69e-6d53-4016-b87e-bdc816dc0365\" (UID: \"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.875667 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c81e69e-6d53-4016-b87e-bdc816dc0365-config-volume\") pod \"3c81e69e-6d53-4016-b87e-bdc816dc0365\" (UID: \"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.875736 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwwsj\" (UniqueName: \"kubernetes.io/projected/3c81e69e-6d53-4016-b87e-bdc816dc0365-kube-api-access-lwwsj\") pod \"3c81e69e-6d53-4016-b87e-bdc816dc0365\" (UID: \"3c81e69e-6d53-4016-b87e-bdc816dc0365\") " Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.876763 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c81e69e-6d53-4016-b87e-bdc816dc0365-config-volume" (OuterVolumeSpecName: "config-volume") pod "3c81e69e-6d53-4016-b87e-bdc816dc0365" (UID: "3c81e69e-6d53-4016-b87e-bdc816dc0365"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.881592 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c81e69e-6d53-4016-b87e-bdc816dc0365-kube-api-access-lwwsj" (OuterVolumeSpecName: "kube-api-access-lwwsj") pod "3c81e69e-6d53-4016-b87e-bdc816dc0365" (UID: "3c81e69e-6d53-4016-b87e-bdc816dc0365"). InnerVolumeSpecName "kube-api-access-lwwsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.882031 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c81e69e-6d53-4016-b87e-bdc816dc0365-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3c81e69e-6d53-4016-b87e-bdc816dc0365" (UID: "3c81e69e-6d53-4016-b87e-bdc816dc0365"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.977179 4851 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c81e69e-6d53-4016-b87e-bdc816dc0365-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.977414 4851 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c81e69e-6d53-4016-b87e-bdc816dc0365-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 13:11:20 crc kubenswrapper[4851]: I0223 13:11:20.977483 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwwsj\" (UniqueName: \"kubernetes.io/projected/3c81e69e-6d53-4016-b87e-bdc816dc0365-kube-api-access-lwwsj\") on node \"crc\" DevicePath \"\"" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.353115 4851 generic.go:334] "Generic (PLEG): container finished" podID="1a0cc846-351c-4e97-a412-82e4a82863bd" 
containerID="533b5830ba896bc49c46b5b461e4ac7805f66236bce68c751e80401ba972c599" exitCode=0 Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.353289 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z5sm" event={"ID":"1a0cc846-351c-4e97-a412-82e4a82863bd","Type":"ContainerDied","Data":"533b5830ba896bc49c46b5b461e4ac7805f66236bce68c751e80401ba972c599"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.353493 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z5sm" event={"ID":"1a0cc846-351c-4e97-a412-82e4a82863bd","Type":"ContainerStarted","Data":"cbf6efef11e089aa90fb23c1a5eb67a5eb0eeb284803186f53fe70e59096a4c7"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.354910 4851 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.355216 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" event={"ID":"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f","Type":"ContainerStarted","Data":"c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.355243 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" event={"ID":"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f","Type":"ContainerStarted","Data":"6977f908a43c813f5ff09ab8c95b064c4fb8dc04b9b4e4e71cb0a77a870651c5"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.355445 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.358515 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg" event={"ID":"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e","Type":"ContainerStarted","Data":"55390cbac440050c7fb3e25fb93f18ba38162557803572f43326096e06916761"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.358544 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg" event={"ID":"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e","Type":"ContainerStarted","Data":"a1da389fe5aa8e6a291184cc6f0d83af38e835cd75e8c205ba30e356b365610d"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.358738 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.361030 4851 generic.go:334] "Generic (PLEG): container finished" podID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" containerID="efb59b3ca00f442903861acb737b1d4fcfdfdcfbf5a145fd29cb250125ec7d71" exitCode=0 Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.361100 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpz2p" event={"ID":"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7","Type":"ContainerDied","Data":"efb59b3ca00f442903861acb737b1d4fcfdfdcfbf5a145fd29cb250125ec7d71"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.362541 4851 generic.go:334] "Generic (PLEG): container finished" podID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" containerID="07a49833e4a6490cda3e4a10a772e4aac943b3a727b743a35a05a0c4e39a1be1" exitCode=0 Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.362572 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsgj2" event={"ID":"952fbf0b-c4b2-47ab-8897-0bae64960c3d","Type":"ContainerDied","Data":"07a49833e4a6490cda3e4a10a772e4aac943b3a727b743a35a05a0c4e39a1be1"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.362585 4851 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsgj2" event={"ID":"952fbf0b-c4b2-47ab-8897-0bae64960c3d","Type":"ContainerStarted","Data":"d8c3d37d6035ccd5590ad8661d60edf7476516a414aff2b6b6f51202cbbbceae"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.365148 4851 generic.go:334] "Generic (PLEG): container finished" podID="4e65eda3-0eae-4672-9f18-c87148fcc449" containerID="30a6b2219ae6d912a3bb124761b503d19c3ce095d8d1f74931550a1f0869e901" exitCode=0 Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.365207 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvh57" event={"ID":"4e65eda3-0eae-4672-9f18-c87148fcc449","Type":"ContainerDied","Data":"30a6b2219ae6d912a3bb124761b503d19c3ce095d8d1f74931550a1f0869e901"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.365228 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvh57" event={"ID":"4e65eda3-0eae-4672-9f18-c87148fcc449","Type":"ContainerStarted","Data":"7e5cf67eb04d4d25d11ceade7557e97821ecc3982ba7391f3df97aa07b6ddf93"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.365853 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.367906 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" event={"ID":"3c81e69e-6d53-4016-b87e-bdc816dc0365","Type":"ContainerDied","Data":"90198e4021b53441d058efbe5fb8f26042d814621c5e22df926fecf0885c4248"} Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.367956 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90198e4021b53441d058efbe5fb8f26042d814621c5e22df926fecf0885c4248" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.368079 4851 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.384710 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.404817 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg" podStartSLOduration=3.40480223 podStartE2EDuration="3.40480223s" podCreationTimestamp="2026-02-23 13:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:21.403985808 +0000 UTC m=+236.085689496" watchObservedRunningTime="2026-02-23 13:11:21.40480223 +0000 UTC m=+236.086505908" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.436454 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" podStartSLOduration=1.436435325 podStartE2EDuration="1.436435325s" podCreationTimestamp="2026-02-23 13:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:21.433173826 +0000 UTC m=+236.114877504" watchObservedRunningTime="2026-02-23 13:11:21.436435325 +0000 UTC m=+236.118139003" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.678182 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:21 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:21 crc 
kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:21 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.678256 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.743998 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6vbb9"] Feb 23 13:11:21 crc kubenswrapper[4851]: E0223 13:11:21.744217 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c81e69e-6d53-4016-b87e-bdc816dc0365" containerName="collect-profiles" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.744228 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c81e69e-6d53-4016-b87e-bdc816dc0365" containerName="collect-profiles" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.744318 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c81e69e-6d53-4016-b87e-bdc816dc0365" containerName="collect-profiles" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.744992 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.747229 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.790481 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vbb9"] Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.887765 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljn2m\" (UniqueName: \"kubernetes.io/projected/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-kube-api-access-ljn2m\") pod \"redhat-marketplace-6vbb9\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.887806 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-utilities\") pod \"redhat-marketplace-6vbb9\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.887983 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-catalog-content\") pod \"redhat-marketplace-6vbb9\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.983641 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328f5d3c-6337-407d-a812-034d9d26069c" path="/var/lib/kubelet/pods/328f5d3c-6337-407d-a812-034d9d26069c/volumes" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.989297 
4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljn2m\" (UniqueName: \"kubernetes.io/projected/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-kube-api-access-ljn2m\") pod \"redhat-marketplace-6vbb9\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.989433 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-utilities\") pod \"redhat-marketplace-6vbb9\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.989467 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-catalog-content\") pod \"redhat-marketplace-6vbb9\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.990398 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-catalog-content\") pod \"redhat-marketplace-6vbb9\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:11:21 crc kubenswrapper[4851]: I0223 13:11:21.990413 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-utilities\") pod \"redhat-marketplace-6vbb9\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.040388 4851 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ljn2m\" (UniqueName: \"kubernetes.io/projected/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-kube-api-access-ljn2m\") pod \"redhat-marketplace-6vbb9\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.059996 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.147572 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vmph8"] Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.157251 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.166174 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmph8"] Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.254463 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.258127 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.262731 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.281294 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.281810 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.285931 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.286175 4851 patch_prober.go:28] interesting pod/console-f9d7485db-x8scz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.38:8443/health\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.286239 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x8scz" podUID="a6fe30bd-a140-4309-9156-52d361049059" containerName="console" probeResult="failure" output="Get \"https://10.217.0.38:8443/health\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.299102 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-utilities\") pod \"redhat-marketplace-vmph8\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.299182 4851 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnhvp\" (UniqueName: \"kubernetes.io/projected/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-kube-api-access-xnhvp\") pod \"redhat-marketplace-vmph8\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.299246 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-catalog-content\") pod \"redhat-marketplace-vmph8\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.302805 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.364749 4851 patch_prober.go:28] interesting pod/downloads-7954f5f757-slxzr container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.364814 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-slxzr" podUID="58e43c54-4e65-4ca6-9a52-f79c58a072d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.365379 4851 patch_prober.go:28] interesting pod/downloads-7954f5f757-slxzr container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Feb 23 
13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.365402 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-slxzr" podUID="58e43c54-4e65-4ca6-9a52-f79c58a072d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.383099 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vbb9"] Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.400650 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1cbd37b0-d09b-4e6a-a29e-60d2d313d429\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.400999 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-utilities\") pod \"redhat-marketplace-vmph8\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.401108 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnhvp\" (UniqueName: \"kubernetes.io/projected/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-kube-api-access-xnhvp\") pod \"redhat-marketplace-vmph8\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.401224 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-catalog-content\") pod 
\"redhat-marketplace-vmph8\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.401269 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1cbd37b0-d09b-4e6a-a29e-60d2d313d429\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.402425 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-catalog-content\") pod \"redhat-marketplace-vmph8\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.402661 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-utilities\") pod \"redhat-marketplace-vmph8\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.412038 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:11:22 crc kubenswrapper[4851]: W0223 13:11:22.421445 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e27ecc1_b0dc_4abb_aca6_58d1f5ff5199.slice/crio-a14e1184f0390aa93fe6ecfa33ce9e1a2316d5c3ccaa2ad0f114b9fb114e590f WatchSource:0}: Error finding container a14e1184f0390aa93fe6ecfa33ce9e1a2316d5c3ccaa2ad0f114b9fb114e590f: Status 404 returned error can't find the container with id 
a14e1184f0390aa93fe6ecfa33ce9e1a2316d5c3ccaa2ad0f114b9fb114e590f Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.425158 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnhvp\" (UniqueName: \"kubernetes.io/projected/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-kube-api-access-xnhvp\") pod \"redhat-marketplace-vmph8\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.483413 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.503215 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1cbd37b0-d09b-4e6a-a29e-60d2d313d429\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.504958 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1cbd37b0-d09b-4e6a-a29e-60d2d313d429\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.505094 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1cbd37b0-d09b-4e6a-a29e-60d2d313d429\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.524982 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1cbd37b0-d09b-4e6a-a29e-60d2d313d429\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.587520 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.674438 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.677636 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:22 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:22 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:22 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.677694 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.734450 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.742348 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.747450 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.748250 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.766994 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.775912 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mcn2z"] Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.777006 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.783358 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mcn2z"] Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.786412 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.798115 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmph8"] Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.818253 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c0872d-1c3a-4166-80dd-7f83182529d5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"75c0872d-1c3a-4166-80dd-7f83182529d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.818316 4851 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-catalog-content\") pod \"redhat-operators-mcn2z\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.818571 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb98g\" (UniqueName: \"kubernetes.io/projected/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-kube-api-access-nb98g\") pod \"redhat-operators-mcn2z\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.818623 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75c0872d-1c3a-4166-80dd-7f83182529d5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"75c0872d-1c3a-4166-80dd-7f83182529d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.818639 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-utilities\") pod \"redhat-operators-mcn2z\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.914630 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.914679 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 
13:11:22.930876 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c0872d-1c3a-4166-80dd-7f83182529d5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"75c0872d-1c3a-4166-80dd-7f83182529d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.930934 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-catalog-content\") pod \"redhat-operators-mcn2z\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.931045 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb98g\" (UniqueName: \"kubernetes.io/projected/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-kube-api-access-nb98g\") pod \"redhat-operators-mcn2z\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.931065 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-utilities\") pod \"redhat-operators-mcn2z\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.931097 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75c0872d-1c3a-4166-80dd-7f83182529d5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"75c0872d-1c3a-4166-80dd-7f83182529d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.932697 4851 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.933107 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75c0872d-1c3a-4166-80dd-7f83182529d5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"75c0872d-1c3a-4166-80dd-7f83182529d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.933485 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-catalog-content\") pod \"redhat-operators-mcn2z\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.933807 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-utilities\") pod \"redhat-operators-mcn2z\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.972783 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.977456 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c0872d-1c3a-4166-80dd-7f83182529d5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"75c0872d-1c3a-4166-80dd-7f83182529d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 13:11:22 crc kubenswrapper[4851]: I0223 13:11:22.977519 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb98g\" (UniqueName: 
\"kubernetes.io/projected/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-kube-api-access-nb98g\") pod \"redhat-operators-mcn2z\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.066734 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.101016 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.146727 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-788lc"] Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.147976 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.157638 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-788lc"] Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.237222 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-utilities\") pod \"redhat-operators-788lc\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.237260 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmq5m\" (UniqueName: \"kubernetes.io/projected/5be292af-08fb-47c0-8665-0e3880fc8b63-kube-api-access-bmq5m\") pod \"redhat-operators-788lc\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 
13:11:23.237303 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-catalog-content\") pod \"redhat-operators-788lc\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.344239 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-utilities\") pod \"redhat-operators-788lc\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.345178 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-utilities\") pod \"redhat-operators-788lc\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.345257 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmq5m\" (UniqueName: \"kubernetes.io/projected/5be292af-08fb-47c0-8665-0e3880fc8b63-kube-api-access-bmq5m\") pod \"redhat-operators-788lc\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.345303 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-catalog-content\") pod \"redhat-operators-788lc\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.345586 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-catalog-content\") pod \"redhat-operators-788lc\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.351721 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.370693 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmq5m\" (UniqueName: \"kubernetes.io/projected/5be292af-08fb-47c0-8665-0e3880fc8b63-kube-api-access-bmq5m\") pod \"redhat-operators-788lc\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.409271 4851 generic.go:334] "Generic (PLEG): container finished" podID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" containerID="b51fa7f2acd4562d3e294fb895884a9372bdb998d72546cf2cdc3f71226d5c37" exitCode=0 Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.409453 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vbb9" event={"ID":"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199","Type":"ContainerDied","Data":"b51fa7f2acd4562d3e294fb895884a9372bdb998d72546cf2cdc3f71226d5c37"} Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.409485 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vbb9" event={"ID":"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199","Type":"ContainerStarted","Data":"a14e1184f0390aa93fe6ecfa33ce9e1a2316d5c3ccaa2ad0f114b9fb114e590f"} Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.415895 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"75c0872d-1c3a-4166-80dd-7f83182529d5","Type":"ContainerStarted","Data":"043f8f87e3bc7c5c9576a12b1e7f68e7037965f559db27cea92e8451b0ad14ad"} Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.417947 4851 generic.go:334] "Generic (PLEG): container finished" podID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" containerID="d64dd91c3b35d7f4cca7d33c6ada27fe8e0fb01f356d54a73889a68cc780fe37" exitCode=0 Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.417998 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmph8" event={"ID":"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2","Type":"ContainerDied","Data":"d64dd91c3b35d7f4cca7d33c6ada27fe8e0fb01f356d54a73889a68cc780fe37"} Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.418014 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmph8" event={"ID":"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2","Type":"ContainerStarted","Data":"291ad3f2703e86d777453612a8769f58f326acb7e2b651566c3bf6931fbe0f5c"} Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.420172 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1cbd37b0-d09b-4e6a-a29e-60d2d313d429","Type":"ContainerStarted","Data":"55036c4011debf85f9f82707b536deb65f2fa81dd0aa77105b16a92b1ba7861e"} Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.455387 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bgglr" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.460685 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mcn2z"] Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.487602 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.686382 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:23 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:23 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:23 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.686694 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:23 crc kubenswrapper[4851]: I0223 13:11:23.946006 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-788lc"] Feb 23 13:11:23 crc kubenswrapper[4851]: W0223 13:11:23.959052 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5be292af_08fb_47c0_8665_0e3880fc8b63.slice/crio-0ad34a3c40889187bc750274cce9dd1057d53dcba1a56f885994ae3477b18021 WatchSource:0}: Error finding container 0ad34a3c40889187bc750274cce9dd1057d53dcba1a56f885994ae3477b18021: Status 404 returned error can't find the container with id 0ad34a3c40889187bc750274cce9dd1057d53dcba1a56f885994ae3477b18021 Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.466266 4851 generic.go:334] "Generic (PLEG): container finished" podID="1cbd37b0-d09b-4e6a-a29e-60d2d313d429" containerID="487ffe90f625e0c53143fce561f9cdec3489e8fa28213e736d09a47dc69e5800" exitCode=0 Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.466372 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1cbd37b0-d09b-4e6a-a29e-60d2d313d429","Type":"ContainerDied","Data":"487ffe90f625e0c53143fce561f9cdec3489e8fa28213e736d09a47dc69e5800"} Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.473125 4851 generic.go:334] "Generic (PLEG): container finished" podID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" containerID="2f150688bc4bea6800eb3467ecd087f922a4c1fbb06d2b920e7e7fda00bccc59" exitCode=0 Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.473182 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcn2z" event={"ID":"01938f77-146f-4d3d-a8f6-d1d4673ad3d4","Type":"ContainerDied","Data":"2f150688bc4bea6800eb3467ecd087f922a4c1fbb06d2b920e7e7fda00bccc59"} Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.473201 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcn2z" event={"ID":"01938f77-146f-4d3d-a8f6-d1d4673ad3d4","Type":"ContainerStarted","Data":"868094ad0487832c2835c9baddb7f15e20cfe127ebbc462fb73c483457517986"} Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.477926 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"75c0872d-1c3a-4166-80dd-7f83182529d5","Type":"ContainerStarted","Data":"09f7f6746dd9c212dd9dd28a3808980ddbbac4f0409f1e60f71b576d62896914"} Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.483625 4851 generic.go:334] "Generic (PLEG): container finished" podID="5be292af-08fb-47c0-8665-0e3880fc8b63" containerID="4066bb2c8050d45a45c0725ed184507bd159ec8d5846b4e4e4231c969bc47f24" exitCode=0 Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.484477 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-788lc" 
event={"ID":"5be292af-08fb-47c0-8665-0e3880fc8b63","Type":"ContainerDied","Data":"4066bb2c8050d45a45c0725ed184507bd159ec8d5846b4e4e4231c969bc47f24"} Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.484498 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-788lc" event={"ID":"5be292af-08fb-47c0-8665-0e3880fc8b63","Type":"ContainerStarted","Data":"0ad34a3c40889187bc750274cce9dd1057d53dcba1a56f885994ae3477b18021"} Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.517412 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.517391302 podStartE2EDuration="2.517391302s" podCreationTimestamp="2026-02-23 13:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:24.514812172 +0000 UTC m=+239.196515880" watchObservedRunningTime="2026-02-23 13:11:24.517391302 +0000 UTC m=+239.199094980" Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.677366 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:24 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:24 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:24 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:24 crc kubenswrapper[4851]: I0223 13:11:24.677427 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:25 crc kubenswrapper[4851]: I0223 13:11:25.502462 4851 generic.go:334] "Generic (PLEG): container finished" 
podID="75c0872d-1c3a-4166-80dd-7f83182529d5" containerID="09f7f6746dd9c212dd9dd28a3808980ddbbac4f0409f1e60f71b576d62896914" exitCode=0 Feb 23 13:11:25 crc kubenswrapper[4851]: I0223 13:11:25.502530 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"75c0872d-1c3a-4166-80dd-7f83182529d5","Type":"ContainerDied","Data":"09f7f6746dd9c212dd9dd28a3808980ddbbac4f0409f1e60f71b576d62896914"} Feb 23 13:11:25 crc kubenswrapper[4851]: I0223 13:11:25.682439 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:25 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:25 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:25 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:25 crc kubenswrapper[4851]: I0223 13:11:25.682725 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:25 crc kubenswrapper[4851]: I0223 13:11:25.803062 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 13:11:25 crc kubenswrapper[4851]: I0223 13:11:25.986051 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kube-api-access\") pod \"1cbd37b0-d09b-4e6a-a29e-60d2d313d429\" (UID: \"1cbd37b0-d09b-4e6a-a29e-60d2d313d429\") " Feb 23 13:11:25 crc kubenswrapper[4851]: I0223 13:11:25.986384 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kubelet-dir\") pod \"1cbd37b0-d09b-4e6a-a29e-60d2d313d429\" (UID: \"1cbd37b0-d09b-4e6a-a29e-60d2d313d429\") " Feb 23 13:11:25 crc kubenswrapper[4851]: I0223 13:11:25.986655 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1cbd37b0-d09b-4e6a-a29e-60d2d313d429" (UID: "1cbd37b0-d09b-4e6a-a29e-60d2d313d429"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:25 crc kubenswrapper[4851]: I0223 13:11:25.992343 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1cbd37b0-d09b-4e6a-a29e-60d2d313d429" (UID: "1cbd37b0-d09b-4e6a-a29e-60d2d313d429"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:11:26 crc kubenswrapper[4851]: I0223 13:11:26.087374 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 13:11:26 crc kubenswrapper[4851]: I0223 13:11:26.087413 4851 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1cbd37b0-d09b-4e6a-a29e-60d2d313d429-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 13:11:26 crc kubenswrapper[4851]: I0223 13:11:26.520720 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 13:11:26 crc kubenswrapper[4851]: I0223 13:11:26.521824 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1cbd37b0-d09b-4e6a-a29e-60d2d313d429","Type":"ContainerDied","Data":"55036c4011debf85f9f82707b536deb65f2fa81dd0aa77105b16a92b1ba7861e"} Feb 23 13:11:26 crc kubenswrapper[4851]: I0223 13:11:26.521870 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55036c4011debf85f9f82707b536deb65f2fa81dd0aa77105b16a92b1ba7861e" Feb 23 13:11:26 crc kubenswrapper[4851]: I0223 13:11:26.679118 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:26 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:26 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:26 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:26 crc kubenswrapper[4851]: I0223 13:11:26.679167 4851 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:27 crc kubenswrapper[4851]: I0223 13:11:27.686815 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:27 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:27 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:27 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:27 crc kubenswrapper[4851]: I0223 13:11:27.687213 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:27 crc kubenswrapper[4851]: I0223 13:11:27.791771 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lzbwt" Feb 23 13:11:28 crc kubenswrapper[4851]: I0223 13:11:28.676964 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:28 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:28 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:28 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:28 crc kubenswrapper[4851]: I0223 13:11:28.677663 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 23 13:11:29 crc kubenswrapper[4851]: I0223 13:11:29.676886 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:29 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:29 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:29 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:29 crc kubenswrapper[4851]: I0223 13:11:29.676948 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:30 crc kubenswrapper[4851]: I0223 13:11:30.677111 4851 patch_prober.go:28] interesting pod/router-default-5444994796-k2qrn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 13:11:30 crc kubenswrapper[4851]: [-]has-synced failed: reason withheld Feb 23 13:11:30 crc kubenswrapper[4851]: [+]process-running ok Feb 23 13:11:30 crc kubenswrapper[4851]: healthz check failed Feb 23 13:11:30 crc kubenswrapper[4851]: I0223 13:11:30.677170 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k2qrn" podUID="ce808fab-8894-45af-86e6-5193f1de3201" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 13:11:31 crc kubenswrapper[4851]: I0223 13:11:31.678705 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:31 crc kubenswrapper[4851]: I0223 13:11:31.681460 4851 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-k2qrn" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.281002 4851 patch_prober.go:28] interesting pod/console-f9d7485db-x8scz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.38:8443/health\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.281066 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x8scz" podUID="a6fe30bd-a140-4309-9156-52d361049059" containerName="console" probeResult="failure" output="Get \"https://10.217.0.38:8443/health\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.372054 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-slxzr" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.493437 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.495348 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.517496 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88d393f-3f9d-4c95-b41b-10e998d5ca0f-metrics-certs\") pod \"network-metrics-daemon-jt4wg\" (UID: \"b88d393f-3f9d-4c95-b41b-10e998d5ca0f\") " pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.526411 4851 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.589281 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"75c0872d-1c3a-4166-80dd-7f83182529d5","Type":"ContainerDied","Data":"043f8f87e3bc7c5c9576a12b1e7f68e7037965f559db27cea92e8451b0ad14ad"} Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.589343 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043f8f87e3bc7c5c9576a12b1e7f68e7037965f559db27cea92e8451b0ad14ad" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.589302 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.595010 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c0872d-1c3a-4166-80dd-7f83182529d5-kube-api-access\") pod \"75c0872d-1c3a-4166-80dd-7f83182529d5\" (UID: \"75c0872d-1c3a-4166-80dd-7f83182529d5\") " Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.595159 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75c0872d-1c3a-4166-80dd-7f83182529d5-kubelet-dir\") pod \"75c0872d-1c3a-4166-80dd-7f83182529d5\" (UID: \"75c0872d-1c3a-4166-80dd-7f83182529d5\") " Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.595256 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75c0872d-1c3a-4166-80dd-7f83182529d5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "75c0872d-1c3a-4166-80dd-7f83182529d5" (UID: "75c0872d-1c3a-4166-80dd-7f83182529d5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.595559 4851 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75c0872d-1c3a-4166-80dd-7f83182529d5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.615759 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75c0872d-1c3a-4166-80dd-7f83182529d5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "75c0872d-1c3a-4166-80dd-7f83182529d5" (UID: "75c0872d-1c3a-4166-80dd-7f83182529d5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.697547 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75c0872d-1c3a-4166-80dd-7f83182529d5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.714378 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 13:11:32 crc kubenswrapper[4851]: I0223 13:11:32.722940 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jt4wg" Feb 23 13:11:39 crc kubenswrapper[4851]: I0223 13:11:39.449407 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:11:41 crc kubenswrapper[4851]: I0223 13:11:41.924988 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:11:41 crc kubenswrapper[4851]: I0223 13:11:41.925082 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:11:42 crc kubenswrapper[4851]: I0223 13:11:42.284942 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:42 crc kubenswrapper[4851]: I0223 13:11:42.288726 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:11:46 crc kubenswrapper[4851]: E0223 13:11:46.381785 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 23 13:11:46 crc kubenswrapper[4851]: E0223 13:11:46.382023 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljn2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6vbb9_openshift-marketplace(5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 13:11:46 crc kubenswrapper[4851]: E0223 13:11:46.383512 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-6vbb9" 
podUID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" Feb 23 13:11:46 crc kubenswrapper[4851]: E0223 13:11:46.816620 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6vbb9" podUID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" Feb 23 13:11:46 crc kubenswrapper[4851]: E0223 13:11:46.898445 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 23 13:11:46 crc kubenswrapper[4851]: E0223 13:11:46.898589 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnhvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vmph8_openshift-marketplace(bff4c5c5-1d60-44d2-abf6-99c7ae6883d2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 13:11:46 crc kubenswrapper[4851]: E0223 13:11:46.899916 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vmph8" podUID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" Feb 23 13:11:50 crc 
kubenswrapper[4851]: E0223 13:11:50.178088 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vmph8" podUID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" Feb 23 13:11:50 crc kubenswrapper[4851]: E0223 13:11:50.200155 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 23 13:11:50 crc kubenswrapper[4851]: E0223 13:11:50.200302 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nb98g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mcn2z_openshift-marketplace(01938f77-146f-4d3d-a8f6-d1d4673ad3d4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 13:11:50 crc kubenswrapper[4851]: E0223 13:11:50.202444 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mcn2z" podUID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" Feb 23 13:11:50 crc 
kubenswrapper[4851]: E0223 13:11:50.288681 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 23 13:11:50 crc kubenswrapper[4851]: E0223 13:11:50.288859 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbg9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-dpz2p_openshift-marketplace(e618f7f4-c1f6-40cd-aa78-e0f711acd1b7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 13:11:50 crc kubenswrapper[4851]: E0223 13:11:50.290083 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dpz2p" podUID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" Feb 23 13:11:50 crc kubenswrapper[4851]: E0223 13:11:50.363425 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 23 13:11:50 crc kubenswrapper[4851]: E0223 13:11:50.363578 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmq5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-788lc_openshift-marketplace(5be292af-08fb-47c0-8665-0e3880fc8b63): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 13:11:50 crc kubenswrapper[4851]: E0223 13:11:50.364761 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-788lc" podUID="5be292af-08fb-47c0-8665-0e3880fc8b63" Feb 23 13:11:51 crc 
kubenswrapper[4851]: E0223 13:11:51.425553 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dpz2p" podUID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" Feb 23 13:11:51 crc kubenswrapper[4851]: E0223 13:11:51.425583 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mcn2z" podUID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" Feb 23 13:11:51 crc kubenswrapper[4851]: E0223 13:11:51.426824 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-788lc" podUID="5be292af-08fb-47c0-8665-0e3880fc8b63" Feb 23 13:11:51 crc kubenswrapper[4851]: E0223 13:11:51.499343 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 23 13:11:51 crc kubenswrapper[4851]: E0223 13:11:51.499752 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8r7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vvh57_openshift-marketplace(4e65eda3-0eae-4672-9f18-c87148fcc449): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 13:11:51 crc kubenswrapper[4851]: E0223 13:11:51.501201 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vvh57" podUID="4e65eda3-0eae-4672-9f18-c87148fcc449" Feb 23 13:11:51 crc 
kubenswrapper[4851]: E0223 13:11:51.524225 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 23 13:11:51 crc kubenswrapper[4851]: E0223 13:11:51.524437 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbftk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-4z5sm_openshift-marketplace(1a0cc846-351c-4e97-a412-82e4a82863bd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 13:11:51 crc kubenswrapper[4851]: E0223 13:11:51.526408 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4z5sm" podUID="1a0cc846-351c-4e97-a412-82e4a82863bd" Feb 23 13:11:51 crc kubenswrapper[4851]: I0223 13:11:51.691230 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsgj2" event={"ID":"952fbf0b-c4b2-47ab-8897-0bae64960c3d","Type":"ContainerStarted","Data":"619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3"} Feb 23 13:11:51 crc kubenswrapper[4851]: E0223 13:11:51.693295 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vvh57" podUID="4e65eda3-0eae-4672-9f18-c87148fcc449" Feb 23 13:11:51 crc kubenswrapper[4851]: E0223 13:11:51.693582 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4z5sm" podUID="1a0cc846-351c-4e97-a412-82e4a82863bd" Feb 23 13:11:51 crc kubenswrapper[4851]: I0223 13:11:51.832137 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jt4wg"] Feb 23 13:11:51 crc kubenswrapper[4851]: W0223 13:11:51.848418 4851 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88d393f_3f9d_4c95_b41b_10e998d5ca0f.slice/crio-eaa7bedde0238a77065a1a4fc67bf7e8d11bcba9494088a7b815b25733e2d5a4 WatchSource:0}: Error finding container eaa7bedde0238a77065a1a4fc67bf7e8d11bcba9494088a7b815b25733e2d5a4: Status 404 returned error can't find the container with id eaa7bedde0238a77065a1a4fc67bf7e8d11bcba9494088a7b815b25733e2d5a4 Feb 23 13:11:52 crc kubenswrapper[4851]: I0223 13:11:52.697024 4851 generic.go:334] "Generic (PLEG): container finished" podID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" containerID="619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3" exitCode=0 Feb 23 13:11:52 crc kubenswrapper[4851]: I0223 13:11:52.697134 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsgj2" event={"ID":"952fbf0b-c4b2-47ab-8897-0bae64960c3d","Type":"ContainerDied","Data":"619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3"} Feb 23 13:11:52 crc kubenswrapper[4851]: I0223 13:11:52.699202 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" event={"ID":"b88d393f-3f9d-4c95-b41b-10e998d5ca0f","Type":"ContainerStarted","Data":"bfd2adc0ce0840c350f18d3469b719dcf593425dec59a62be6117acc4ea2b1a8"} Feb 23 13:11:52 crc kubenswrapper[4851]: I0223 13:11:52.699230 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" event={"ID":"b88d393f-3f9d-4c95-b41b-10e998d5ca0f","Type":"ContainerStarted","Data":"0c425dce1da842a288964819614f073fcbe7b1007ebe023a390a51a2e047b048"} Feb 23 13:11:52 crc kubenswrapper[4851]: I0223 13:11:52.699240 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jt4wg" 
event={"ID":"b88d393f-3f9d-4c95-b41b-10e998d5ca0f","Type":"ContainerStarted","Data":"eaa7bedde0238a77065a1a4fc67bf7e8d11bcba9494088a7b815b25733e2d5a4"} Feb 23 13:11:52 crc kubenswrapper[4851]: I0223 13:11:52.728937 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jt4wg" podStartSLOduration=210.728921118 podStartE2EDuration="3m30.728921118s" podCreationTimestamp="2026-02-23 13:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:52.728736583 +0000 UTC m=+267.410440281" watchObservedRunningTime="2026-02-23 13:11:52.728921118 +0000 UTC m=+267.410624796" Feb 23 13:11:53 crc kubenswrapper[4851]: I0223 13:11:53.050651 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zgbsx" Feb 23 13:11:53 crc kubenswrapper[4851]: I0223 13:11:53.710575 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsgj2" event={"ID":"952fbf0b-c4b2-47ab-8897-0bae64960c3d","Type":"ContainerStarted","Data":"2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f"} Feb 23 13:11:53 crc kubenswrapper[4851]: I0223 13:11:53.741285 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lsgj2" podStartSLOduration=3.015707745 podStartE2EDuration="34.741245424s" podCreationTimestamp="2026-02-23 13:11:19 +0000 UTC" firstStartedPulling="2026-02-23 13:11:21.36382985 +0000 UTC m=+236.045533528" lastFinishedPulling="2026-02-23 13:11:53.089367529 +0000 UTC m=+267.771071207" observedRunningTime="2026-02-23 13:11:53.738773345 +0000 UTC m=+268.420477033" watchObservedRunningTime="2026-02-23 13:11:53.741245424 +0000 UTC m=+268.422949142" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.139573 4851 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 13:11:58 crc kubenswrapper[4851]: E0223 13:11:58.140880 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbd37b0-d09b-4e6a-a29e-60d2d313d429" containerName="pruner" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.140900 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbd37b0-d09b-4e6a-a29e-60d2d313d429" containerName="pruner" Feb 23 13:11:58 crc kubenswrapper[4851]: E0223 13:11:58.140927 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75c0872d-1c3a-4166-80dd-7f83182529d5" containerName="pruner" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.140934 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="75c0872d-1c3a-4166-80dd-7f83182529d5" containerName="pruner" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.141204 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="75c0872d-1c3a-4166-80dd-7f83182529d5" containerName="pruner" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.141245 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cbd37b0-d09b-4e6a-a29e-60d2d313d429" containerName="pruner" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.141976 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.145903 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.146184 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.153412 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.188058 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7875ecc-93a5-454a-af0d-0fc4e7926faf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.188289 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7875ecc-93a5-454a-af0d-0fc4e7926faf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.289643 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7875ecc-93a5-454a-af0d-0fc4e7926faf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.289712 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7875ecc-93a5-454a-af0d-0fc4e7926faf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.289923 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7875ecc-93a5-454a-af0d-0fc4e7926faf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.312294 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7875ecc-93a5-454a-af0d-0fc4e7926faf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 13:11:58 crc kubenswrapper[4851]: I0223 13:11:58.493814 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 13:11:59 crc kubenswrapper[4851]: I0223 13:11:59.002384 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 13:11:59 crc kubenswrapper[4851]: I0223 13:11:59.751746 4851 generic.go:334] "Generic (PLEG): container finished" podID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" containerID="11a055a6ede8eff70c8c0984f4b14ed27e6b4d5b7369ba6e16e9f0fd07e0d546" exitCode=0 Feb 23 13:11:59 crc kubenswrapper[4851]: I0223 13:11:59.751808 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vbb9" event={"ID":"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199","Type":"ContainerDied","Data":"11a055a6ede8eff70c8c0984f4b14ed27e6b4d5b7369ba6e16e9f0fd07e0d546"} Feb 23 13:11:59 crc kubenswrapper[4851]: I0223 13:11:59.753462 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b7875ecc-93a5-454a-af0d-0fc4e7926faf","Type":"ContainerStarted","Data":"8bdd188d3e47ab99ab20b6324fe499606ca1d19624400ab043f73a3fd2e23e09"} Feb 23 13:11:59 crc kubenswrapper[4851]: I0223 13:11:59.753511 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b7875ecc-93a5-454a-af0d-0fc4e7926faf","Type":"ContainerStarted","Data":"0c4c36b5673b75222b1ed6e30e30d5a68a02fa3b2b69e42b056159d6fdafc90e"} Feb 23 13:11:59 crc kubenswrapper[4851]: I0223 13:11:59.789070 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.789044684 podStartE2EDuration="1.789044684s" podCreationTimestamp="2026-02-23 13:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:11:59.785981308 +0000 UTC m=+274.467684986" 
watchObservedRunningTime="2026-02-23 13:11:59.789044684 +0000 UTC m=+274.470748422" Feb 23 13:12:00 crc kubenswrapper[4851]: I0223 13:12:00.268932 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lsgj2" Feb 23 13:12:00 crc kubenswrapper[4851]: I0223 13:12:00.271011 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lsgj2" Feb 23 13:12:00 crc kubenswrapper[4851]: I0223 13:12:00.428678 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lsgj2" Feb 23 13:12:00 crc kubenswrapper[4851]: I0223 13:12:00.657575 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gllrl"] Feb 23 13:12:00 crc kubenswrapper[4851]: I0223 13:12:00.760315 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vbb9" event={"ID":"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199","Type":"ContainerStarted","Data":"a341665c7d6ce87497efa544fcf0e5fe2561647d374d6a19518daa7adfecc60e"} Feb 23 13:12:00 crc kubenswrapper[4851]: I0223 13:12:00.762650 4851 generic.go:334] "Generic (PLEG): container finished" podID="b7875ecc-93a5-454a-af0d-0fc4e7926faf" containerID="8bdd188d3e47ab99ab20b6324fe499606ca1d19624400ab043f73a3fd2e23e09" exitCode=0 Feb 23 13:12:00 crc kubenswrapper[4851]: I0223 13:12:00.763169 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b7875ecc-93a5-454a-af0d-0fc4e7926faf","Type":"ContainerDied","Data":"8bdd188d3e47ab99ab20b6324fe499606ca1d19624400ab043f73a3fd2e23e09"} Feb 23 13:12:00 crc kubenswrapper[4851]: I0223 13:12:00.780893 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6vbb9" podStartSLOduration=2.972361836 podStartE2EDuration="39.780875055s" 
podCreationTimestamp="2026-02-23 13:11:21 +0000 UTC" firstStartedPulling="2026-02-23 13:11:23.412582015 +0000 UTC m=+238.094285693" lastFinishedPulling="2026-02-23 13:12:00.221095234 +0000 UTC m=+274.902798912" observedRunningTime="2026-02-23 13:12:00.777952763 +0000 UTC m=+275.459656451" watchObservedRunningTime="2026-02-23 13:12:00.780875055 +0000 UTC m=+275.462578733" Feb 23 13:12:00 crc kubenswrapper[4851]: I0223 13:12:00.820059 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lsgj2" Feb 23 13:12:01 crc kubenswrapper[4851]: I0223 13:12:01.350435 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lsgj2"] Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.060714 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.060754 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.091722 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.107633 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.139422 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kubelet-dir\") pod \"b7875ecc-93a5-454a-af0d-0fc4e7926faf\" (UID: \"b7875ecc-93a5-454a-af0d-0fc4e7926faf\") " Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.139517 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kube-api-access\") pod \"b7875ecc-93a5-454a-af0d-0fc4e7926faf\" (UID: \"b7875ecc-93a5-454a-af0d-0fc4e7926faf\") " Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.139553 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b7875ecc-93a5-454a-af0d-0fc4e7926faf" (UID: "b7875ecc-93a5-454a-af0d-0fc4e7926faf"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.140374 4851 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.146614 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b7875ecc-93a5-454a-af0d-0fc4e7926faf" (UID: "b7875ecc-93a5-454a-af0d-0fc4e7926faf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.241674 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7875ecc-93a5-454a-af0d-0fc4e7926faf-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.772754 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"b7875ecc-93a5-454a-af0d-0fc4e7926faf","Type":"ContainerDied","Data":"0c4c36b5673b75222b1ed6e30e30d5a68a02fa3b2b69e42b056159d6fdafc90e"} Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.772791 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.772816 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c4c36b5673b75222b1ed6e30e30d5a68a02fa3b2b69e42b056159d6fdafc90e" Feb 23 13:12:02 crc kubenswrapper[4851]: I0223 13:12:02.772889 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lsgj2" podUID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" containerName="registry-server" containerID="cri-o://2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f" gracePeriod=2 Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.121600 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 13:12:03 crc kubenswrapper[4851]: E0223 13:12:03.122061 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7875ecc-93a5-454a-af0d-0fc4e7926faf" containerName="pruner" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.122074 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7875ecc-93a5-454a-af0d-0fc4e7926faf" containerName="pruner" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.122199 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7875ecc-93a5-454a-af0d-0fc4e7926faf" containerName="pruner" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.122561 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.125014 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.126238 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.129368 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.155061 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kube-api-access\") pod \"installer-9-crc\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.155103 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-var-lock\") pod \"installer-9-crc\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.155133 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.256691 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.256831 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kube-api-access\") pod \"installer-9-crc\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.256831 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.256856 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-var-lock\") pod \"installer-9-crc\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.256936 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-var-lock\") pod \"installer-9-crc\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.274083 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kube-api-access\") pod \"installer-9-crc\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.454056 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.673464 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lsgj2" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.763202 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-catalog-content\") pod \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.763264 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhm4z\" (UniqueName: \"kubernetes.io/projected/952fbf0b-c4b2-47ab-8897-0bae64960c3d-kube-api-access-xhm4z\") pod \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.763363 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-utilities\") pod \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\" (UID: \"952fbf0b-c4b2-47ab-8897-0bae64960c3d\") " Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.764383 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-utilities" (OuterVolumeSpecName: "utilities") pod "952fbf0b-c4b2-47ab-8897-0bae64960c3d" (UID: "952fbf0b-c4b2-47ab-8897-0bae64960c3d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.770852 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/952fbf0b-c4b2-47ab-8897-0bae64960c3d-kube-api-access-xhm4z" (OuterVolumeSpecName: "kube-api-access-xhm4z") pod "952fbf0b-c4b2-47ab-8897-0bae64960c3d" (UID: "952fbf0b-c4b2-47ab-8897-0bae64960c3d"). InnerVolumeSpecName "kube-api-access-xhm4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.777780 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvh57" event={"ID":"4e65eda3-0eae-4672-9f18-c87148fcc449","Type":"ContainerStarted","Data":"669a3edb4607f8c2898150cffcc71a6644ec8fea7158eaa8083b939cc92e60ad"} Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.780402 4851 generic.go:334] "Generic (PLEG): container finished" podID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" containerID="ad476b0a6c70dcf5bd48d4d950f23f692407584d628553d14823d1e007de2e30" exitCode=0 Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.780459 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmph8" event={"ID":"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2","Type":"ContainerDied","Data":"ad476b0a6c70dcf5bd48d4d950f23f692407584d628553d14823d1e007de2e30"} Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.787587 4851 generic.go:334] "Generic (PLEG): container finished" podID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" containerID="2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f" exitCode=0 Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.787626 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lsgj2" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.787655 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsgj2" event={"ID":"952fbf0b-c4b2-47ab-8897-0bae64960c3d","Type":"ContainerDied","Data":"2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f"} Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.787721 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lsgj2" event={"ID":"952fbf0b-c4b2-47ab-8897-0bae64960c3d","Type":"ContainerDied","Data":"d8c3d37d6035ccd5590ad8661d60edf7476516a414aff2b6b6f51202cbbbceae"} Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.787745 4851 scope.go:117] "RemoveContainer" containerID="2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.805187 4851 scope.go:117] "RemoveContainer" containerID="619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.821830 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "952fbf0b-c4b2-47ab-8897-0bae64960c3d" (UID: "952fbf0b-c4b2-47ab-8897-0bae64960c3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.831266 4851 scope.go:117] "RemoveContainer" containerID="07a49833e4a6490cda3e4a10a772e4aac943b3a727b743a35a05a0c4e39a1be1" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.848810 4851 scope.go:117] "RemoveContainer" containerID="2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f" Feb 23 13:12:03 crc kubenswrapper[4851]: E0223 13:12:03.849444 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f\": container with ID starting with 2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f not found: ID does not exist" containerID="2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.849486 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f"} err="failed to get container status \"2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f\": rpc error: code = NotFound desc = could not find container \"2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f\": container with ID starting with 2258c694ab3f9e587e540d96d37da11e4ecd7e7fb3cf43b3cf4118d94d2f610f not found: ID does not exist" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.849507 4851 scope.go:117] "RemoveContainer" containerID="619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3" Feb 23 13:12:03 crc kubenswrapper[4851]: E0223 13:12:03.849871 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3\": container with ID starting with 
619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3 not found: ID does not exist" containerID="619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.849890 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3"} err="failed to get container status \"619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3\": rpc error: code = NotFound desc = could not find container \"619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3\": container with ID starting with 619edfc2f6dc7d4eda67995bb5f7daa52a10a6cbc13ab9768643ed8ea9abc1b3 not found: ID does not exist" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.849902 4851 scope.go:117] "RemoveContainer" containerID="07a49833e4a6490cda3e4a10a772e4aac943b3a727b743a35a05a0c4e39a1be1" Feb 23 13:12:03 crc kubenswrapper[4851]: E0223 13:12:03.850762 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07a49833e4a6490cda3e4a10a772e4aac943b3a727b743a35a05a0c4e39a1be1\": container with ID starting with 07a49833e4a6490cda3e4a10a772e4aac943b3a727b743a35a05a0c4e39a1be1 not found: ID does not exist" containerID="07a49833e4a6490cda3e4a10a772e4aac943b3a727b743a35a05a0c4e39a1be1" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.850782 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a49833e4a6490cda3e4a10a772e4aac943b3a727b743a35a05a0c4e39a1be1"} err="failed to get container status \"07a49833e4a6490cda3e4a10a772e4aac943b3a727b743a35a05a0c4e39a1be1\": rpc error: code = NotFound desc = could not find container \"07a49833e4a6490cda3e4a10a772e4aac943b3a727b743a35a05a0c4e39a1be1\": container with ID starting with 07a49833e4a6490cda3e4a10a772e4aac943b3a727b743a35a05a0c4e39a1be1 not found: ID does not 
exist" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.864960 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.864988 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/952fbf0b-c4b2-47ab-8897-0bae64960c3d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.865001 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhm4z\" (UniqueName: \"kubernetes.io/projected/952fbf0b-c4b2-47ab-8897-0bae64960c3d-kube-api-access-xhm4z\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:03 crc kubenswrapper[4851]: I0223 13:12:03.934490 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 13:12:04 crc kubenswrapper[4851]: I0223 13:12:04.109400 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lsgj2"] Feb 23 13:12:04 crc kubenswrapper[4851]: I0223 13:12:04.111909 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lsgj2"] Feb 23 13:12:04 crc kubenswrapper[4851]: I0223 13:12:04.794775 4851 generic.go:334] "Generic (PLEG): container finished" podID="4e65eda3-0eae-4672-9f18-c87148fcc449" containerID="669a3edb4607f8c2898150cffcc71a6644ec8fea7158eaa8083b939cc92e60ad" exitCode=0 Feb 23 13:12:04 crc kubenswrapper[4851]: I0223 13:12:04.794814 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvh57" event={"ID":"4e65eda3-0eae-4672-9f18-c87148fcc449","Type":"ContainerDied","Data":"669a3edb4607f8c2898150cffcc71a6644ec8fea7158eaa8083b939cc92e60ad"} Feb 23 13:12:04 crc kubenswrapper[4851]: I0223 13:12:04.797878 4851 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmph8" event={"ID":"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2","Type":"ContainerStarted","Data":"c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8"} Feb 23 13:12:04 crc kubenswrapper[4851]: I0223 13:12:04.799882 4851 generic.go:334] "Generic (PLEG): container finished" podID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" containerID="b5bb26c3a3c7cfff20d49516d71047d484299223ffc441a98081cc3ef3909758" exitCode=0 Feb 23 13:12:04 crc kubenswrapper[4851]: I0223 13:12:04.799944 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcn2z" event={"ID":"01938f77-146f-4d3d-a8f6-d1d4673ad3d4","Type":"ContainerDied","Data":"b5bb26c3a3c7cfff20d49516d71047d484299223ffc441a98081cc3ef3909758"} Feb 23 13:12:04 crc kubenswrapper[4851]: I0223 13:12:04.801520 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0","Type":"ContainerStarted","Data":"5fa681700ce4eddb6221e61ce706f988004861a76cfe67f3a82d08137b81572a"} Feb 23 13:12:04 crc kubenswrapper[4851]: I0223 13:12:04.801556 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0","Type":"ContainerStarted","Data":"dfe568dc765e0890dacb60510a051c3f32809344f81fc8bda60ab7885ff2d69d"} Feb 23 13:12:04 crc kubenswrapper[4851]: I0223 13:12:04.847165 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.84714183 podStartE2EDuration="1.84714183s" podCreationTimestamp="2026-02-23 13:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:12:04.846394289 +0000 UTC m=+279.528097967" watchObservedRunningTime="2026-02-23 13:12:04.84714183 +0000 UTC 
m=+279.528845508" Feb 23 13:12:04 crc kubenswrapper[4851]: I0223 13:12:04.877312 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vmph8" podStartSLOduration=2.054275381 podStartE2EDuration="42.877293956s" podCreationTimestamp="2026-02-23 13:11:22 +0000 UTC" firstStartedPulling="2026-02-23 13:11:23.419159855 +0000 UTC m=+238.100863533" lastFinishedPulling="2026-02-23 13:12:04.24217844 +0000 UTC m=+278.923882108" observedRunningTime="2026-02-23 13:12:04.875984539 +0000 UTC m=+279.557688217" watchObservedRunningTime="2026-02-23 13:12:04.877293956 +0000 UTC m=+279.558997634" Feb 23 13:12:05 crc kubenswrapper[4851]: I0223 13:12:05.820344 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpz2p" event={"ID":"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7","Type":"ContainerStarted","Data":"1ed6484c3a6e12210ddbca5e5f268a54d492b55d6d563c0d8af968e473d583e1"} Feb 23 13:12:05 crc kubenswrapper[4851]: I0223 13:12:05.823080 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvh57" event={"ID":"4e65eda3-0eae-4672-9f18-c87148fcc449","Type":"ContainerStarted","Data":"0e412f72ce1da68caf5aedd67fd7c81eed4d2e40188ae60781d07ad5a28f6a25"} Feb 23 13:12:05 crc kubenswrapper[4851]: I0223 13:12:05.825717 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-788lc" event={"ID":"5be292af-08fb-47c0-8665-0e3880fc8b63","Type":"ContainerStarted","Data":"865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1"} Feb 23 13:12:05 crc kubenswrapper[4851]: I0223 13:12:05.827580 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcn2z" event={"ID":"01938f77-146f-4d3d-a8f6-d1d4673ad3d4","Type":"ContainerStarted","Data":"740c1621bb4d17314c3ec5c38ff740e9d6dbeeab38efb232e64648e040f56950"} Feb 23 13:12:05 crc kubenswrapper[4851]: I0223 
13:12:05.855597 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vvh57" podStartSLOduration=3.014930391 podStartE2EDuration="46.855582938s" podCreationTimestamp="2026-02-23 13:11:19 +0000 UTC" firstStartedPulling="2026-02-23 13:11:21.367402877 +0000 UTC m=+236.049106545" lastFinishedPulling="2026-02-23 13:12:05.208055414 +0000 UTC m=+279.889759092" observedRunningTime="2026-02-23 13:12:05.855066594 +0000 UTC m=+280.536770272" watchObservedRunningTime="2026-02-23 13:12:05.855582938 +0000 UTC m=+280.537286616" Feb 23 13:12:05 crc kubenswrapper[4851]: I0223 13:12:05.974877 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" path="/var/lib/kubelet/pods/952fbf0b-c4b2-47ab-8897-0bae64960c3d/volumes" Feb 23 13:12:06 crc kubenswrapper[4851]: I0223 13:12:06.834008 4851 generic.go:334] "Generic (PLEG): container finished" podID="5be292af-08fb-47c0-8665-0e3880fc8b63" containerID="865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1" exitCode=0 Feb 23 13:12:06 crc kubenswrapper[4851]: I0223 13:12:06.834077 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-788lc" event={"ID":"5be292af-08fb-47c0-8665-0e3880fc8b63","Type":"ContainerDied","Data":"865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1"} Feb 23 13:12:06 crc kubenswrapper[4851]: I0223 13:12:06.835477 4851 generic.go:334] "Generic (PLEG): container finished" podID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" containerID="1ed6484c3a6e12210ddbca5e5f268a54d492b55d6d563c0d8af968e473d583e1" exitCode=0 Feb 23 13:12:06 crc kubenswrapper[4851]: I0223 13:12:06.835514 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpz2p" event={"ID":"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7","Type":"ContainerDied","Data":"1ed6484c3a6e12210ddbca5e5f268a54d492b55d6d563c0d8af968e473d583e1"} Feb 23 
13:12:06 crc kubenswrapper[4851]: I0223 13:12:06.852028 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mcn2z" podStartSLOduration=4.129593918 podStartE2EDuration="44.85201291s" podCreationTimestamp="2026-02-23 13:11:22 +0000 UTC" firstStartedPulling="2026-02-23 13:11:24.475382384 +0000 UTC m=+239.157086052" lastFinishedPulling="2026-02-23 13:12:05.197801366 +0000 UTC m=+279.879505044" observedRunningTime="2026-02-23 13:12:05.891350602 +0000 UTC m=+280.573054320" watchObservedRunningTime="2026-02-23 13:12:06.85201291 +0000 UTC m=+281.533716598" Feb 23 13:12:07 crc kubenswrapper[4851]: I0223 13:12:07.841619 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpz2p" event={"ID":"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7","Type":"ContainerStarted","Data":"b570034b7a043905cb826d937b5f0211d8f3661095b665ea6f58066190b82334"} Feb 23 13:12:07 crc kubenswrapper[4851]: I0223 13:12:07.843213 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z5sm" event={"ID":"1a0cc846-351c-4e97-a412-82e4a82863bd","Type":"ContainerStarted","Data":"ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0"} Feb 23 13:12:07 crc kubenswrapper[4851]: I0223 13:12:07.845163 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-788lc" event={"ID":"5be292af-08fb-47c0-8665-0e3880fc8b63","Type":"ContainerStarted","Data":"18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471"} Feb 23 13:12:07 crc kubenswrapper[4851]: I0223 13:12:07.859780 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dpz2p" podStartSLOduration=2.976053964 podStartE2EDuration="48.859763218s" podCreationTimestamp="2026-02-23 13:11:19 +0000 UTC" firstStartedPulling="2026-02-23 13:11:21.363207303 +0000 UTC m=+236.044910981" 
lastFinishedPulling="2026-02-23 13:12:07.246916557 +0000 UTC m=+281.928620235" observedRunningTime="2026-02-23 13:12:07.85662002 +0000 UTC m=+282.538323698" watchObservedRunningTime="2026-02-23 13:12:07.859763218 +0000 UTC m=+282.541466886" Feb 23 13:12:07 crc kubenswrapper[4851]: I0223 13:12:07.881255 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-788lc" podStartSLOduration=2.084550081 podStartE2EDuration="44.88123666s" podCreationTimestamp="2026-02-23 13:11:23 +0000 UTC" firstStartedPulling="2026-02-23 13:11:24.489895171 +0000 UTC m=+239.171598849" lastFinishedPulling="2026-02-23 13:12:07.28658175 +0000 UTC m=+281.968285428" observedRunningTime="2026-02-23 13:12:07.87872997 +0000 UTC m=+282.560433658" watchObservedRunningTime="2026-02-23 13:12:07.88123666 +0000 UTC m=+282.562940338" Feb 23 13:12:08 crc kubenswrapper[4851]: I0223 13:12:08.858263 4851 generic.go:334] "Generic (PLEG): container finished" podID="1a0cc846-351c-4e97-a412-82e4a82863bd" containerID="ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0" exitCode=0 Feb 23 13:12:08 crc kubenswrapper[4851]: I0223 13:12:08.858352 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z5sm" event={"ID":"1a0cc846-351c-4e97-a412-82e4a82863bd","Type":"ContainerDied","Data":"ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0"} Feb 23 13:12:09 crc kubenswrapper[4851]: I0223 13:12:09.922741 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dpz2p" Feb 23 13:12:09 crc kubenswrapper[4851]: I0223 13:12:09.922837 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dpz2p" Feb 23 13:12:09 crc kubenswrapper[4851]: I0223 13:12:09.988860 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-dpz2p" Feb 23 13:12:10 crc kubenswrapper[4851]: I0223 13:12:10.112478 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vvh57" Feb 23 13:12:10 crc kubenswrapper[4851]: I0223 13:12:10.112820 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vvh57" Feb 23 13:12:10 crc kubenswrapper[4851]: I0223 13:12:10.149099 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vvh57" Feb 23 13:12:10 crc kubenswrapper[4851]: I0223 13:12:10.905313 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vvh57" Feb 23 13:12:11 crc kubenswrapper[4851]: I0223 13:12:11.877131 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z5sm" event={"ID":"1a0cc846-351c-4e97-a412-82e4a82863bd","Type":"ContainerStarted","Data":"f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d"} Feb 23 13:12:11 crc kubenswrapper[4851]: I0223 13:12:11.894651 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4z5sm" podStartSLOduration=2.137780925 podStartE2EDuration="51.894635961s" podCreationTimestamp="2026-02-23 13:11:20 +0000 UTC" firstStartedPulling="2026-02-23 13:11:21.354677699 +0000 UTC m=+236.036381377" lastFinishedPulling="2026-02-23 13:12:11.111532735 +0000 UTC m=+285.793236413" observedRunningTime="2026-02-23 13:12:11.891877904 +0000 UTC m=+286.573581592" watchObservedRunningTime="2026-02-23 13:12:11.894635961 +0000 UTC m=+286.576339639" Feb 23 13:12:11 crc kubenswrapper[4851]: I0223 13:12:11.925236 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:12:11 crc kubenswrapper[4851]: I0223 13:12:11.925307 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:12:12 crc kubenswrapper[4851]: I0223 13:12:12.117031 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:12:12 crc kubenswrapper[4851]: I0223 13:12:12.483516 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:12:12 crc kubenswrapper[4851]: I0223 13:12:12.483609 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:12:12 crc kubenswrapper[4851]: I0223 13:12:12.522029 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:12:12 crc kubenswrapper[4851]: I0223 13:12:12.940954 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:12:13 crc kubenswrapper[4851]: I0223 13:12:13.101890 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:12:13 crc kubenswrapper[4851]: I0223 13:12:13.102207 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:12:13 crc kubenswrapper[4851]: I0223 13:12:13.139919 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:12:13 crc kubenswrapper[4851]: I0223 13:12:13.488296 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:12:13 crc kubenswrapper[4851]: I0223 13:12:13.488348 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:12:13 crc kubenswrapper[4851]: I0223 13:12:13.522585 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:12:13 crc kubenswrapper[4851]: I0223 13:12:13.950704 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:12:13 crc kubenswrapper[4851]: I0223 13:12:13.981242 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.156172 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmph8"] Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.156454 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vmph8" podUID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" containerName="registry-server" containerID="cri-o://c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8" gracePeriod=2 Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.581099 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.655196 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-utilities\") pod \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.655290 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-catalog-content\") pod \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.655443 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnhvp\" (UniqueName: \"kubernetes.io/projected/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-kube-api-access-xnhvp\") pod \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\" (UID: \"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2\") " Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.656085 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-utilities" (OuterVolumeSpecName: "utilities") pod "bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" (UID: "bff4c5c5-1d60-44d2-abf6-99c7ae6883d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.661058 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-kube-api-access-xnhvp" (OuterVolumeSpecName: "kube-api-access-xnhvp") pod "bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" (UID: "bff4c5c5-1d60-44d2-abf6-99c7ae6883d2"). InnerVolumeSpecName "kube-api-access-xnhvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.681732 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" (UID: "bff4c5c5-1d60-44d2-abf6-99c7ae6883d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.757147 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnhvp\" (UniqueName: \"kubernetes.io/projected/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-kube-api-access-xnhvp\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.757184 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.757194 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.934920 4851 generic.go:334] "Generic (PLEG): container finished" podID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" containerID="c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8" exitCode=0 Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.934967 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmph8" event={"ID":"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2","Type":"ContainerDied","Data":"c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8"} Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.934998 4851 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-vmph8" event={"ID":"bff4c5c5-1d60-44d2-abf6-99c7ae6883d2","Type":"ContainerDied","Data":"291ad3f2703e86d777453612a8769f58f326acb7e2b651566c3bf6931fbe0f5c"} Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.935018 4851 scope.go:117] "RemoveContainer" containerID="c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8" Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.935150 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmph8" Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.966739 4851 scope.go:117] "RemoveContainer" containerID="ad476b0a6c70dcf5bd48d4d950f23f692407584d628553d14823d1e007de2e30" Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.967253 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmph8"] Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.971349 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmph8"] Feb 23 13:12:16 crc kubenswrapper[4851]: I0223 13:12:16.994206 4851 scope.go:117] "RemoveContainer" containerID="d64dd91c3b35d7f4cca7d33c6ada27fe8e0fb01f356d54a73889a68cc780fe37" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.009931 4851 scope.go:117] "RemoveContainer" containerID="c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8" Feb 23 13:12:17 crc kubenswrapper[4851]: E0223 13:12:17.010424 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8\": container with ID starting with c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8 not found: ID does not exist" containerID="c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.010477 4851 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8"} err="failed to get container status \"c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8\": rpc error: code = NotFound desc = could not find container \"c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8\": container with ID starting with c1360b4573e4d9411c8beb83fd2c24d5d5395bfce76f266584c628332e5743d8 not found: ID does not exist" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.010532 4851 scope.go:117] "RemoveContainer" containerID="ad476b0a6c70dcf5bd48d4d950f23f692407584d628553d14823d1e007de2e30" Feb 23 13:12:17 crc kubenswrapper[4851]: E0223 13:12:17.010908 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad476b0a6c70dcf5bd48d4d950f23f692407584d628553d14823d1e007de2e30\": container with ID starting with ad476b0a6c70dcf5bd48d4d950f23f692407584d628553d14823d1e007de2e30 not found: ID does not exist" containerID="ad476b0a6c70dcf5bd48d4d950f23f692407584d628553d14823d1e007de2e30" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.010963 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad476b0a6c70dcf5bd48d4d950f23f692407584d628553d14823d1e007de2e30"} err="failed to get container status \"ad476b0a6c70dcf5bd48d4d950f23f692407584d628553d14823d1e007de2e30\": rpc error: code = NotFound desc = could not find container \"ad476b0a6c70dcf5bd48d4d950f23f692407584d628553d14823d1e007de2e30\": container with ID starting with ad476b0a6c70dcf5bd48d4d950f23f692407584d628553d14823d1e007de2e30 not found: ID does not exist" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.010997 4851 scope.go:117] "RemoveContainer" containerID="d64dd91c3b35d7f4cca7d33c6ada27fe8e0fb01f356d54a73889a68cc780fe37" Feb 23 13:12:17 crc kubenswrapper[4851]: E0223 
13:12:17.011404 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d64dd91c3b35d7f4cca7d33c6ada27fe8e0fb01f356d54a73889a68cc780fe37\": container with ID starting with d64dd91c3b35d7f4cca7d33c6ada27fe8e0fb01f356d54a73889a68cc780fe37 not found: ID does not exist" containerID="d64dd91c3b35d7f4cca7d33c6ada27fe8e0fb01f356d54a73889a68cc780fe37" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.011441 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d64dd91c3b35d7f4cca7d33c6ada27fe8e0fb01f356d54a73889a68cc780fe37"} err="failed to get container status \"d64dd91c3b35d7f4cca7d33c6ada27fe8e0fb01f356d54a73889a68cc780fe37\": rpc error: code = NotFound desc = could not find container \"d64dd91c3b35d7f4cca7d33c6ada27fe8e0fb01f356d54a73889a68cc780fe37\": container with ID starting with d64dd91c3b35d7f4cca7d33c6ada27fe8e0fb01f356d54a73889a68cc780fe37 not found: ID does not exist" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.157704 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-788lc"] Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.158023 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-788lc" podUID="5be292af-08fb-47c0-8665-0e3880fc8b63" containerName="registry-server" containerID="cri-o://18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471" gracePeriod=2 Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.529032 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.568697 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmq5m\" (UniqueName: \"kubernetes.io/projected/5be292af-08fb-47c0-8665-0e3880fc8b63-kube-api-access-bmq5m\") pod \"5be292af-08fb-47c0-8665-0e3880fc8b63\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.568764 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-catalog-content\") pod \"5be292af-08fb-47c0-8665-0e3880fc8b63\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.568794 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-utilities\") pod \"5be292af-08fb-47c0-8665-0e3880fc8b63\" (UID: \"5be292af-08fb-47c0-8665-0e3880fc8b63\") " Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.569608 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-utilities" (OuterVolumeSpecName: "utilities") pod "5be292af-08fb-47c0-8665-0e3880fc8b63" (UID: "5be292af-08fb-47c0-8665-0e3880fc8b63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.574053 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be292af-08fb-47c0-8665-0e3880fc8b63-kube-api-access-bmq5m" (OuterVolumeSpecName: "kube-api-access-bmq5m") pod "5be292af-08fb-47c0-8665-0e3880fc8b63" (UID: "5be292af-08fb-47c0-8665-0e3880fc8b63"). InnerVolumeSpecName "kube-api-access-bmq5m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.669904 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.670239 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmq5m\" (UniqueName: \"kubernetes.io/projected/5be292af-08fb-47c0-8665-0e3880fc8b63-kube-api-access-bmq5m\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.706817 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5be292af-08fb-47c0-8665-0e3880fc8b63" (UID: "5be292af-08fb-47c0-8665-0e3880fc8b63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.771979 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be292af-08fb-47c0-8665-0e3880fc8b63-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.950776 4851 generic.go:334] "Generic (PLEG): container finished" podID="5be292af-08fb-47c0-8665-0e3880fc8b63" containerID="18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471" exitCode=0 Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.950822 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-788lc" event={"ID":"5be292af-08fb-47c0-8665-0e3880fc8b63","Type":"ContainerDied","Data":"18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471"} Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.950870 4851 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-788lc" event={"ID":"5be292af-08fb-47c0-8665-0e3880fc8b63","Type":"ContainerDied","Data":"0ad34a3c40889187bc750274cce9dd1057d53dcba1a56f885994ae3477b18021"} Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.950892 4851 scope.go:117] "RemoveContainer" containerID="18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.950895 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-788lc" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.965481 4851 scope.go:117] "RemoveContainer" containerID="865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.975221 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" path="/var/lib/kubelet/pods/bff4c5c5-1d60-44d2-abf6-99c7ae6883d2/volumes" Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.975879 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-788lc"] Feb 23 13:12:17 crc kubenswrapper[4851]: I0223 13:12:17.979248 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-788lc"] Feb 23 13:12:18 crc kubenswrapper[4851]: I0223 13:12:18.000873 4851 scope.go:117] "RemoveContainer" containerID="4066bb2c8050d45a45c0725ed184507bd159ec8d5846b4e4e4231c969bc47f24" Feb 23 13:12:18 crc kubenswrapper[4851]: I0223 13:12:18.020429 4851 scope.go:117] "RemoveContainer" containerID="18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471" Feb 23 13:12:18 crc kubenswrapper[4851]: E0223 13:12:18.021033 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471\": container with ID starting with 
18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471 not found: ID does not exist" containerID="18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471" Feb 23 13:12:18 crc kubenswrapper[4851]: I0223 13:12:18.021149 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471"} err="failed to get container status \"18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471\": rpc error: code = NotFound desc = could not find container \"18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471\": container with ID starting with 18304f4ce9e8cab90b6f8b7701b7dcc3acc7e04778bd3397592acf8fa8c2d471 not found: ID does not exist" Feb 23 13:12:18 crc kubenswrapper[4851]: I0223 13:12:18.021242 4851 scope.go:117] "RemoveContainer" containerID="865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1" Feb 23 13:12:18 crc kubenswrapper[4851]: E0223 13:12:18.021758 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1\": container with ID starting with 865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1 not found: ID does not exist" containerID="865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1" Feb 23 13:12:18 crc kubenswrapper[4851]: I0223 13:12:18.021825 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1"} err="failed to get container status \"865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1\": rpc error: code = NotFound desc = could not find container \"865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1\": container with ID starting with 865b9dce4d8593de6a9b7eb195fb87a4c5043774b11bb018607112290721b5b1 not found: ID does not 
exist" Feb 23 13:12:18 crc kubenswrapper[4851]: I0223 13:12:18.021855 4851 scope.go:117] "RemoveContainer" containerID="4066bb2c8050d45a45c0725ed184507bd159ec8d5846b4e4e4231c969bc47f24" Feb 23 13:12:18 crc kubenswrapper[4851]: E0223 13:12:18.022197 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4066bb2c8050d45a45c0725ed184507bd159ec8d5846b4e4e4231c969bc47f24\": container with ID starting with 4066bb2c8050d45a45c0725ed184507bd159ec8d5846b4e4e4231c969bc47f24 not found: ID does not exist" containerID="4066bb2c8050d45a45c0725ed184507bd159ec8d5846b4e4e4231c969bc47f24" Feb 23 13:12:18 crc kubenswrapper[4851]: I0223 13:12:18.022284 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4066bb2c8050d45a45c0725ed184507bd159ec8d5846b4e4e4231c969bc47f24"} err="failed to get container status \"4066bb2c8050d45a45c0725ed184507bd159ec8d5846b4e4e4231c969bc47f24\": rpc error: code = NotFound desc = could not find container \"4066bb2c8050d45a45c0725ed184507bd159ec8d5846b4e4e4231c969bc47f24\": container with ID starting with 4066bb2c8050d45a45c0725ed184507bd159ec8d5846b4e4e4231c969bc47f24 not found: ID does not exist" Feb 23 13:12:19 crc kubenswrapper[4851]: I0223 13:12:19.963832 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dpz2p" Feb 23 13:12:19 crc kubenswrapper[4851]: I0223 13:12:19.976811 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be292af-08fb-47c0-8665-0e3880fc8b63" path="/var/lib/kubelet/pods/5be292af-08fb-47c0-8665-0e3880fc8b63/volumes" Feb 23 13:12:20 crc kubenswrapper[4851]: I0223 13:12:20.492203 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:12:20 crc kubenswrapper[4851]: I0223 13:12:20.492265 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:12:20 crc kubenswrapper[4851]: I0223 13:12:20.549682 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:12:21 crc kubenswrapper[4851]: I0223 13:12:21.028146 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:12:22 crc kubenswrapper[4851]: I0223 13:12:22.550636 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z5sm"] Feb 23 13:12:22 crc kubenswrapper[4851]: I0223 13:12:22.981768 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4z5sm" podUID="1a0cc846-351c-4e97-a412-82e4a82863bd" containerName="registry-server" containerID="cri-o://f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d" gracePeriod=2 Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.344929 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.447425 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbftk\" (UniqueName: \"kubernetes.io/projected/1a0cc846-351c-4e97-a412-82e4a82863bd-kube-api-access-jbftk\") pod \"1a0cc846-351c-4e97-a412-82e4a82863bd\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.447517 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-utilities\") pod \"1a0cc846-351c-4e97-a412-82e4a82863bd\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.447662 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-catalog-content\") pod \"1a0cc846-351c-4e97-a412-82e4a82863bd\" (UID: \"1a0cc846-351c-4e97-a412-82e4a82863bd\") " Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.449096 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-utilities" (OuterVolumeSpecName: "utilities") pod "1a0cc846-351c-4e97-a412-82e4a82863bd" (UID: "1a0cc846-351c-4e97-a412-82e4a82863bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.453691 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0cc846-351c-4e97-a412-82e4a82863bd-kube-api-access-jbftk" (OuterVolumeSpecName: "kube-api-access-jbftk") pod "1a0cc846-351c-4e97-a412-82e4a82863bd" (UID: "1a0cc846-351c-4e97-a412-82e4a82863bd"). InnerVolumeSpecName "kube-api-access-jbftk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.499662 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a0cc846-351c-4e97-a412-82e4a82863bd" (UID: "1a0cc846-351c-4e97-a412-82e4a82863bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.550030 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.550085 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbftk\" (UniqueName: \"kubernetes.io/projected/1a0cc846-351c-4e97-a412-82e4a82863bd-kube-api-access-jbftk\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.550106 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0cc846-351c-4e97-a412-82e4a82863bd-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.989751 4851 generic.go:334] "Generic (PLEG): container finished" podID="1a0cc846-351c-4e97-a412-82e4a82863bd" containerID="f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d" exitCode=0 Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.989803 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z5sm" event={"ID":"1a0cc846-351c-4e97-a412-82e4a82863bd","Type":"ContainerDied","Data":"f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d"} Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.989834 4851 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-4z5sm" event={"ID":"1a0cc846-351c-4e97-a412-82e4a82863bd","Type":"ContainerDied","Data":"cbf6efef11e089aa90fb23c1a5eb67a5eb0eeb284803186f53fe70e59096a4c7"} Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.989856 4851 scope.go:117] "RemoveContainer" containerID="f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d" Feb 23 13:12:23 crc kubenswrapper[4851]: I0223 13:12:23.989854 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z5sm" Feb 23 13:12:24 crc kubenswrapper[4851]: I0223 13:12:24.014148 4851 scope.go:117] "RemoveContainer" containerID="ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0" Feb 23 13:12:24 crc kubenswrapper[4851]: I0223 13:12:24.025560 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z5sm"] Feb 23 13:12:24 crc kubenswrapper[4851]: I0223 13:12:24.029274 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4z5sm"] Feb 23 13:12:24 crc kubenswrapper[4851]: I0223 13:12:24.043457 4851 scope.go:117] "RemoveContainer" containerID="533b5830ba896bc49c46b5b461e4ac7805f66236bce68c751e80401ba972c599" Feb 23 13:12:24 crc kubenswrapper[4851]: I0223 13:12:24.070444 4851 scope.go:117] "RemoveContainer" containerID="f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d" Feb 23 13:12:24 crc kubenswrapper[4851]: E0223 13:12:24.070886 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d\": container with ID starting with f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d not found: ID does not exist" containerID="f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d" Feb 23 13:12:24 crc kubenswrapper[4851]: I0223 
13:12:24.070924 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d"} err="failed to get container status \"f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d\": rpc error: code = NotFound desc = could not find container \"f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d\": container with ID starting with f06216f2002871736f4ad5ce8fa79b15194f021c3bdf2a7849a1fe0f6599293d not found: ID does not exist" Feb 23 13:12:24 crc kubenswrapper[4851]: I0223 13:12:24.070952 4851 scope.go:117] "RemoveContainer" containerID="ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0" Feb 23 13:12:24 crc kubenswrapper[4851]: E0223 13:12:24.071375 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0\": container with ID starting with ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0 not found: ID does not exist" containerID="ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0" Feb 23 13:12:24 crc kubenswrapper[4851]: I0223 13:12:24.071415 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0"} err="failed to get container status \"ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0\": rpc error: code = NotFound desc = could not find container \"ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0\": container with ID starting with ea7d0a3ac6f77e57466ab79c82162c08407e5f070e2af3809a2740da1f0d7dd0 not found: ID does not exist" Feb 23 13:12:24 crc kubenswrapper[4851]: I0223 13:12:24.071440 4851 scope.go:117] "RemoveContainer" containerID="533b5830ba896bc49c46b5b461e4ac7805f66236bce68c751e80401ba972c599" Feb 23 13:12:24 crc 
kubenswrapper[4851]: E0223 13:12:24.071760 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"533b5830ba896bc49c46b5b461e4ac7805f66236bce68c751e80401ba972c599\": container with ID starting with 533b5830ba896bc49c46b5b461e4ac7805f66236bce68c751e80401ba972c599 not found: ID does not exist" containerID="533b5830ba896bc49c46b5b461e4ac7805f66236bce68c751e80401ba972c599" Feb 23 13:12:24 crc kubenswrapper[4851]: I0223 13:12:24.071791 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"533b5830ba896bc49c46b5b461e4ac7805f66236bce68c751e80401ba972c599"} err="failed to get container status \"533b5830ba896bc49c46b5b461e4ac7805f66236bce68c751e80401ba972c599\": rpc error: code = NotFound desc = could not find container \"533b5830ba896bc49c46b5b461e4ac7805f66236bce68c751e80401ba972c599\": container with ID starting with 533b5830ba896bc49c46b5b461e4ac7805f66236bce68c751e80401ba972c599 not found: ID does not exist" Feb 23 13:12:25 crc kubenswrapper[4851]: I0223 13:12:25.694947 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" podUID="3f659f30-3a3e-4031-a1bf-b26038294135" containerName="oauth-openshift" containerID="cri-o://d997d42df974edc95876bd437e03126d94648f2d4ae2608048f6d7bd9ab7e8c8" gracePeriod=15 Feb 23 13:12:25 crc kubenswrapper[4851]: I0223 13:12:25.785395 4851 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 23 13:12:25 crc kubenswrapper[4851]: I0223 13:12:25.975949 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a0cc846-351c-4e97-a412-82e4a82863bd" path="/var/lib/kubelet/pods/1a0cc846-351c-4e97-a412-82e4a82863bd/volumes" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.009781 4851 generic.go:334] "Generic (PLEG): container finished" 
podID="3f659f30-3a3e-4031-a1bf-b26038294135" containerID="d997d42df974edc95876bd437e03126d94648f2d4ae2608048f6d7bd9ab7e8c8" exitCode=0 Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.009830 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" event={"ID":"3f659f30-3a3e-4031-a1bf-b26038294135","Type":"ContainerDied","Data":"d997d42df974edc95876bd437e03126d94648f2d4ae2608048f6d7bd9ab7e8c8"} Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.119017 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196012 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-provider-selection\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196085 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-service-ca\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196107 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-login\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196123 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-router-certs\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196147 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-serving-cert\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196170 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-error\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196190 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-trusted-ca-bundle\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196223 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-audit-policies\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196253 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-session\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196288 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-idp-0-file-data\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196310 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-ocp-branding-template\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196400 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-cliconfig\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196429 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f659f30-3a3e-4031-a1bf-b26038294135-audit-dir\") pod \"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.196473 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtm57\" (UniqueName: \"kubernetes.io/projected/3f659f30-3a3e-4031-a1bf-b26038294135-kube-api-access-qtm57\") pod 
\"3f659f30-3a3e-4031-a1bf-b26038294135\" (UID: \"3f659f30-3a3e-4031-a1bf-b26038294135\") " Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.197069 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.197394 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.197317 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.197506 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f659f30-3a3e-4031-a1bf-b26038294135-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.197547 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.204468 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.204833 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.205263 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.205492 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.205668 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.205845 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.206051 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.206060 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.210371 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f659f30-3a3e-4031-a1bf-b26038294135-kube-api-access-qtm57" (OuterVolumeSpecName: "kube-api-access-qtm57") pod "3f659f30-3a3e-4031-a1bf-b26038294135" (UID: "3f659f30-3a3e-4031-a1bf-b26038294135"). InnerVolumeSpecName "kube-api-access-qtm57". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297635 4851 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297679 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297694 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297707 4851 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297720 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297732 4851 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f659f30-3a3e-4031-a1bf-b26038294135-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297746 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtm57\" (UniqueName: \"kubernetes.io/projected/3f659f30-3a3e-4031-a1bf-b26038294135-kube-api-access-qtm57\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297758 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297773 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297785 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 
crc kubenswrapper[4851]: I0223 13:12:26.297796 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297808 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297820 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:26 crc kubenswrapper[4851]: I0223 13:12:26.297831 4851 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f659f30-3a3e-4031-a1bf-b26038294135-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:27 crc kubenswrapper[4851]: I0223 13:12:27.015888 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" event={"ID":"3f659f30-3a3e-4031-a1bf-b26038294135","Type":"ContainerDied","Data":"49b4825bee39bf0948b2a9dd96ac94970b222ba879130587d77a4f14286d3346"} Feb 23 13:12:27 crc kubenswrapper[4851]: I0223 13:12:27.015950 4851 scope.go:117] "RemoveContainer" containerID="d997d42df974edc95876bd437e03126d94648f2d4ae2608048f6d7bd9ab7e8c8" Feb 23 13:12:27 crc kubenswrapper[4851]: I0223 13:12:27.017443 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gllrl" Feb 23 13:12:27 crc kubenswrapper[4851]: I0223 13:12:27.063556 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gllrl"] Feb 23 13:12:27 crc kubenswrapper[4851]: I0223 13:12:27.067881 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gllrl"] Feb 23 13:12:27 crc kubenswrapper[4851]: I0223 13:12:27.983792 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f659f30-3a3e-4031-a1bf-b26038294135" path="/var/lib/kubelet/pods/3f659f30-3a3e-4031-a1bf-b26038294135/volumes" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636093 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-59dc8c9c98-xng6m"] Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636666 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0cc846-351c-4e97-a412-82e4a82863bd" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636678 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0cc846-351c-4e97-a412-82e4a82863bd" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636689 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f659f30-3a3e-4031-a1bf-b26038294135" containerName="oauth-openshift" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636695 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f659f30-3a3e-4031-a1bf-b26038294135" containerName="oauth-openshift" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636703 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636708 4851 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636717 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be292af-08fb-47c0-8665-0e3880fc8b63" containerName="extract-content" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636724 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be292af-08fb-47c0-8665-0e3880fc8b63" containerName="extract-content" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636731 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be292af-08fb-47c0-8665-0e3880fc8b63" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636736 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be292af-08fb-47c0-8665-0e3880fc8b63" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636746 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be292af-08fb-47c0-8665-0e3880fc8b63" containerName="extract-utilities" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636752 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be292af-08fb-47c0-8665-0e3880fc8b63" containerName="extract-utilities" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636760 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" containerName="extract-utilities" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636765 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" containerName="extract-utilities" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636772 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" containerName="extract-content" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636777 4851 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" containerName="extract-content" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636785 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0cc846-351c-4e97-a412-82e4a82863bd" containerName="extract-content" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636791 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0cc846-351c-4e97-a412-82e4a82863bd" containerName="extract-content" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636800 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" containerName="extract-content" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636806 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" containerName="extract-content" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636815 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" containerName="extract-utilities" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636821 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" containerName="extract-utilities" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636834 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636840 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: E0223 13:12:34.636849 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0cc846-351c-4e97-a412-82e4a82863bd" containerName="extract-utilities" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636855 4851 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1a0cc846-351c-4e97-a412-82e4a82863bd" containerName="extract-utilities" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636934 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff4c5c5-1d60-44d2-abf6-99c7ae6883d2" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636947 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="952fbf0b-c4b2-47ab-8897-0bae64960c3d" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636958 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be292af-08fb-47c0-8665-0e3880fc8b63" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636965 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0cc846-351c-4e97-a412-82e4a82863bd" containerName="registry-server" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.636972 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f659f30-3a3e-4031-a1bf-b26038294135" containerName="oauth-openshift" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.637302 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.639430 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.640069 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.640293 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.640535 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.640662 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.640977 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.641236 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.642086 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.642453 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.642607 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 13:12:34 crc kubenswrapper[4851]: 
I0223 13:12:34.647302 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.649564 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.650496 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59dc8c9c98-xng6m"] Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.650668 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.654880 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.654908 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724376 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-audit-dir\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724414 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " 
pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724438 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-template-error\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724466 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-session\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724508 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-audit-policies\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724530 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724562 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724598 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724631 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmq8j\" (UniqueName: \"kubernetes.io/projected/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-kube-api-access-rmq8j\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724663 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724692 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-router-certs\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724714 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-service-ca\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724769 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-template-login\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.724789 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.825572 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-session\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: 
\"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.825919 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-audit-policies\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.826026 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.826917 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.827032 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.827153 4851 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rmq8j\" (UniqueName: \"kubernetes.io/projected/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-kube-api-access-rmq8j\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.827349 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.827489 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-router-certs\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.827602 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-service-ca\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.827722 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-template-login\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: 
\"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.827873 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.828027 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-audit-dir\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.828119 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.828240 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-template-error\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.829197 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-audit-dir\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.829122 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.830038 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-audit-policies\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.834317 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.834601 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-template-error\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 
crc kubenswrapper[4851]: I0223 13:12:34.834792 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-router-certs\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.835015 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-service-ca\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.835217 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-session\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.835237 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.841012 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.841418 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-template-login\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.841833 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.843573 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.845716 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmq8j\" (UniqueName: \"kubernetes.io/projected/6983caf6-67f8-4ce6-8870-e7f85ca3bc57-kube-api-access-rmq8j\") pod \"oauth-openshift-59dc8c9c98-xng6m\" (UID: \"6983caf6-67f8-4ce6-8870-e7f85ca3bc57\") " pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" 
Feb 23 13:12:34 crc kubenswrapper[4851]: I0223 13:12:34.985057 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:35 crc kubenswrapper[4851]: I0223 13:12:35.430615 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59dc8c9c98-xng6m"] Feb 23 13:12:36 crc kubenswrapper[4851]: I0223 13:12:36.067056 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" event={"ID":"6983caf6-67f8-4ce6-8870-e7f85ca3bc57","Type":"ContainerStarted","Data":"118c1a07a8a1b7ebbd31e1d62dbcf2bd340658b220dc645a75636c99a326e2a6"} Feb 23 13:12:36 crc kubenswrapper[4851]: I0223 13:12:36.067133 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" event={"ID":"6983caf6-67f8-4ce6-8870-e7f85ca3bc57","Type":"ContainerStarted","Data":"6b44f3b94518e922a06f31afa013d281f075b6a791915f256ec4ac1cfb171a54"} Feb 23 13:12:36 crc kubenswrapper[4851]: I0223 13:12:36.067612 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:36 crc kubenswrapper[4851]: I0223 13:12:36.237921 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" Feb 23 13:12:36 crc kubenswrapper[4851]: I0223 13:12:36.259304 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-59dc8c9c98-xng6m" podStartSLOduration=36.259285838 podStartE2EDuration="36.259285838s" podCreationTimestamp="2026-02-23 13:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:12:36.088939889 +0000 UTC m=+310.770643657" watchObservedRunningTime="2026-02-23 
13:12:36.259285838 +0000 UTC m=+310.940989516" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.782951 4851 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.783942 4851 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.784118 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.784235 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7" gracePeriod=15 Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.784277 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf" gracePeriod=15 Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.784314 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78" gracePeriod=15 Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.784353 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
containerID="cri-o://23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407" gracePeriod=15 Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.784298 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871" gracePeriod=15 Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.784772 4851 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.784929 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.784948 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.784960 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.784968 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.784977 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.784984 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.784993 4851 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785001 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.785012 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785019 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.785031 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785039 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.785051 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785059 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.785069 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785076 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.785085 4851 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785092 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.785105 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785112 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785235 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785246 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785256 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785267 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785278 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785288 4851 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785295 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785303 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.785533 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.822865 4851 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.5:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.918857 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.918936 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.919047 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.919177 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.919204 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.919404 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.919464 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.919653 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.924624 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.924680 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.924731 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.925530 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a39796e37d7c46747743408ee115fc38d96faf5b9f64d05a5b6e261756d05626"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 13:12:41 crc kubenswrapper[4851]: E0223 13:12:41.925448 4851 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-npswg.1896e247eab38fed\": dial tcp 38.102.83.5:6443: connect: connection refused" event=< Feb 23 13:12:41 crc kubenswrapper[4851]: &Event{ObjectMeta:{machine-config-daemon-npswg.1896e247eab38fed openshift-machine-config-operator 29474 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-npswg,UID:c5a296ee-a904-4283-8849-65abb16717b4,APIVersion:v1,ResourceVersion:26946,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Feb 23 13:12:41 crc kubenswrapper[4851]: body: Feb 23 13:12:41 crc kubenswrapper[4851]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 13:11:41 +0000 UTC,LastTimestamp:2026-02-23 13:12:41.924657419 +0000 UTC m=+316.606361097,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 23 13:12:41 crc kubenswrapper[4851]: > Feb 23 13:12:41 crc kubenswrapper[4851]: I0223 13:12:41.925587 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://a39796e37d7c46747743408ee115fc38d96faf5b9f64d05a5b6e261756d05626" gracePeriod=600 Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.021717 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.021799 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.021914 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.021954 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.022000 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.022019 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.022146 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.022235 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.022293 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.022555 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.022678 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: 
I0223 13:12:42.022705 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.022715 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.023154 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.023215 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.023275 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.097548 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.099002 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.099711 4851 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407" exitCode=0 Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.099738 4851 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf" exitCode=0 Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.099745 4851 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871" exitCode=0 Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.099753 4851 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78" exitCode=2 Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.099822 4851 scope.go:117] "RemoveContainer" containerID="93c0acfdeffbf83ea7e2b234bbe6186020c562f8c7dc1fda40daead2599326d1" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.105704 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="a39796e37d7c46747743408ee115fc38d96faf5b9f64d05a5b6e261756d05626" exitCode=0 Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.105787 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"a39796e37d7c46747743408ee115fc38d96faf5b9f64d05a5b6e261756d05626"} Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.107419 4851 generic.go:334] "Generic (PLEG): container finished" podID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" containerID="5fa681700ce4eddb6221e61ce706f988004861a76cfe67f3a82d08137b81572a" exitCode=0 Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.107452 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0","Type":"ContainerDied","Data":"5fa681700ce4eddb6221e61ce706f988004861a76cfe67f3a82d08137b81572a"} Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.108942 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:42 crc kubenswrapper[4851]: I0223 13:12:42.123979 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:42 crc kubenswrapper[4851]: W0223 13:12:42.146021 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-3b0dcc43c951afa553002f9306f76ddfc008296df01a10e96feab5d53e54eb4f WatchSource:0}: Error finding container 3b0dcc43c951afa553002f9306f76ddfc008296df01a10e96feab5d53e54eb4f: Status 404 returned error can't find the container with id 3b0dcc43c951afa553002f9306f76ddfc008296df01a10e96feab5d53e54eb4f Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.113462 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8"} Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.114012 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3b0dcc43c951afa553002f9306f76ddfc008296df01a10e96feab5d53e54eb4f"} Feb 23 13:12:43 crc kubenswrapper[4851]: E0223 13:12:43.114973 4851 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.5:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.115060 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: 
connection refused" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.116646 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"7a4fffcd1de0aee50b0d802adbc8f5e9c57018a083d0f79caa2e97709f627f3e"} Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.117386 4851 status_manager.go:851] "Failed to get status for pod" podUID="c5a296ee-a904-4283-8849-65abb16717b4" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-npswg\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.117732 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.119077 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.342037 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.342908 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.343384 4851 status_manager.go:851] "Failed to get status for pod" podUID="c5a296ee-a904-4283-8849-65abb16717b4" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-npswg\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.440932 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kubelet-dir\") pod \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.440990 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kube-api-access\") pod \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.441043 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-var-lock\") pod \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\" (UID: \"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0\") " Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.441036 4851 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" (UID: "b5cb35a1-1b1d-48c6-840d-6b0f90d765d0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.441187 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-var-lock" (OuterVolumeSpecName: "var-lock") pod "b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" (UID: "b5cb35a1-1b1d-48c6-840d-6b0f90d765d0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.441524 4851 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.441541 4851 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-var-lock\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.447505 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" (UID: "b5cb35a1-1b1d-48c6-840d-6b0f90d765d0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:12:43 crc kubenswrapper[4851]: I0223 13:12:43.542675 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5cb35a1-1b1d-48c6-840d-6b0f90d765d0-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.125439 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.125437 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b5cb35a1-1b1d-48c6-840d-6b0f90d765d0","Type":"ContainerDied","Data":"dfe568dc765e0890dacb60510a051c3f32809344f81fc8bda60ab7885ff2d69d"} Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.125846 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfe568dc765e0890dacb60510a051c3f32809344f81fc8bda60ab7885ff2d69d" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.128441 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.129045 4851 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7" exitCode=0 Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.129125 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41c177643683a2a55ecd1150b1ba4f4ac1a47f68935bf7609a6d31362c869ebc" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.152231 4851 status_manager.go:851] "Failed to get status for pod" podUID="c5a296ee-a904-4283-8849-65abb16717b4" pod="openshift-machine-config-operator/machine-config-daemon-npswg" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-npswg\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.152562 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.154352 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.155083 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.155819 4851 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.156043 4851 status_manager.go:851] "Failed to get status for pod" podUID="c5a296ee-a904-4283-8849-65abb16717b4" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-npswg\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.156317 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.250273 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.250340 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.250395 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.250401 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.250464 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.250561 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.250717 4851 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.250744 4851 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:44 crc kubenswrapper[4851]: I0223 13:12:44.250754 4851 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 23 13:12:44 crc kubenswrapper[4851]: E0223 13:12:44.255687 4851 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-npswg.1896e247eab38fed\": dial tcp 38.102.83.5:6443: connect: connection refused" event=< Feb 23 13:12:44 crc kubenswrapper[4851]: 
&Event{ObjectMeta:{machine-config-daemon-npswg.1896e247eab38fed openshift-machine-config-operator 29474 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-npswg,UID:c5a296ee-a904-4283-8849-65abb16717b4,APIVersion:v1,ResourceVersion:26946,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Feb 23 13:12:44 crc kubenswrapper[4851]: body: Feb 23 13:12:44 crc kubenswrapper[4851]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 13:11:41 +0000 UTC,LastTimestamp:2026-02-23 13:12:41.924657419 +0000 UTC m=+316.606361097,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 23 13:12:44 crc kubenswrapper[4851]: > Feb 23 13:12:45 crc kubenswrapper[4851]: I0223 13:12:45.137455 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:45 crc kubenswrapper[4851]: I0223 13:12:45.153352 4851 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:45 crc kubenswrapper[4851]: I0223 13:12:45.153754 4851 status_manager.go:851] "Failed to get status for pod" podUID="c5a296ee-a904-4283-8849-65abb16717b4" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-npswg\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:45 crc kubenswrapper[4851]: I0223 13:12:45.154240 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:45 crc kubenswrapper[4851]: I0223 13:12:45.974998 4851 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:45 crc kubenswrapper[4851]: I0223 13:12:45.975760 4851 status_manager.go:851] "Failed to get status for pod" podUID="c5a296ee-a904-4283-8849-65abb16717b4" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-npswg\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:45 crc kubenswrapper[4851]: I0223 13:12:45.976232 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:45 crc kubenswrapper[4851]: I0223 13:12:45.983415 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 23 13:12:48 crc kubenswrapper[4851]: I0223 13:12:48.930136 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:12:48 crc kubenswrapper[4851]: I0223 13:12:48.930620 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:12:48 crc kubenswrapper[4851]: W0223 13:12:48.931038 4851 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27334": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:12:48 crc kubenswrapper[4851]: E0223 13:12:48.931101 4851 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27334\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:12:48 crc kubenswrapper[4851]: W0223 13:12:48.931231 4851 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27331": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:12:48 crc kubenswrapper[4851]: E0223 13:12:48.931302 4851 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27331\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:12:49 crc kubenswrapper[4851]: I0223 13:12:49.032287 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:12:49 crc kubenswrapper[4851]: I0223 13:12:49.032372 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:12:49 crc kubenswrapper[4851]: W0223 13:12:49.032971 4851 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27334": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:12:49 crc kubenswrapper[4851]: E0223 13:12:49.033053 4851 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27334\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:12:49 crc kubenswrapper[4851]: E0223 13:12:49.887967 4851 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:49 crc kubenswrapper[4851]: E0223 13:12:49.888492 4851 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:49 crc 
kubenswrapper[4851]: E0223 13:12:49.888763 4851 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:49 crc kubenswrapper[4851]: E0223 13:12:49.889002 4851 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:49 crc kubenswrapper[4851]: E0223 13:12:49.889243 4851 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:49 crc kubenswrapper[4851]: I0223 13:12:49.889272 4851 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 23 13:12:49 crc kubenswrapper[4851]: E0223 13:12:49.889563 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="200ms" Feb 23 13:12:49 crc kubenswrapper[4851]: E0223 13:12:49.930853 4851 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Feb 23 13:12:49 crc kubenswrapper[4851]: E0223 13:12:49.930901 4851 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:12:49 crc kubenswrapper[4851]: E0223 13:12:49.930951 4851 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:51.930924817 +0000 UTC m=+446.612628495 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Feb 23 13:12:49 crc kubenswrapper[4851]: E0223 13:12:49.930967 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:51.930961358 +0000 UTC m=+446.612665036 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:12:49 crc kubenswrapper[4851]: E0223 13:12:49.980644 4851 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.5:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" volumeName="registry-storage" Feb 23 13:12:50 crc kubenswrapper[4851]: E0223 13:12:50.032966 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
failed to sync configmap cache: timed out waiting for the condition Feb 23 13:12:50 crc kubenswrapper[4851]: E0223 13:12:50.032990 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:12:50 crc kubenswrapper[4851]: W0223 13:12:50.033953 4851 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27334": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:12:50 crc kubenswrapper[4851]: E0223 13:12:50.034044 4851 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27334\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:12:50 crc kubenswrapper[4851]: E0223 13:12:50.090666 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="400ms" Feb 23 13:12:50 crc kubenswrapper[4851]: E0223 13:12:50.491833 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="800ms" Feb 23 13:12:50 crc kubenswrapper[4851]: W0223 13:12:50.949154 4851 reflector.go:561] 
object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27334": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:12:50 crc kubenswrapper[4851]: E0223 13:12:50.949235 4851 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27334\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.034089 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.034132 4851 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.034187 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:53.034168614 +0000 UTC m=+447.715872292 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.034086 4851 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.034206 4851 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.034231 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 13:14:53.034225185 +0000 UTC m=+447.715928863 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.292277 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="1.6s" Feb 23 13:12:51 crc kubenswrapper[4851]: W0223 13:12:51.596456 4851 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27334": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.596842 4851 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27334\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:12:51 crc kubenswrapper[4851]: W0223 13:12:51.598834 4851 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27331": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.598914 
4851 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27331\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.906345 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:12:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:12:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:12:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T13:12:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.906979 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:51 
crc kubenswrapper[4851]: E0223 13:12:51.907464 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.907643 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.907792 4851 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:51 crc kubenswrapper[4851]: E0223 13:12:51.907846 4851 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 13:12:52 crc kubenswrapper[4851]: E0223 13:12:52.893195 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="3.2s" Feb 23 13:12:53 crc kubenswrapper[4851]: W0223 13:12:53.013317 4851 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27334": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:12:53 crc kubenswrapper[4851]: E0223 13:12:53.013443 4851 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: 
failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27334\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:12:54 crc kubenswrapper[4851]: E0223 13:12:54.256813 4851 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-npswg.1896e247eab38fed\": dial tcp 38.102.83.5:6443: connect: connection refused" event=< Feb 23 13:12:54 crc kubenswrapper[4851]: &Event{ObjectMeta:{machine-config-daemon-npswg.1896e247eab38fed openshift-machine-config-operator 29474 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-npswg,UID:c5a296ee-a904-4283-8849-65abb16717b4,APIVersion:v1,ResourceVersion:26946,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Feb 23 13:12:54 crc kubenswrapper[4851]: body: Feb 23 13:12:54 crc kubenswrapper[4851]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 13:11:41 +0000 UTC,LastTimestamp:2026-02-23 13:12:41.924657419 +0000 UTC m=+316.606361097,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 23 13:12:54 crc kubenswrapper[4851]: > Feb 23 13:12:54 crc kubenswrapper[4851]: W0223 13:12:54.306306 4851 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27334": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:12:54 crc kubenswrapper[4851]: E0223 13:12:54.306430 4851 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27334\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.200059 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.200770 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.200839 4851 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7eae57f25a2c5c6185b9efefc8b3729b06b64180892ecced033765aaebf9b5fa" exitCode=1 Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.200885 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7eae57f25a2c5c6185b9efefc8b3729b06b64180892ecced033765aaebf9b5fa"} Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.201593 4851 scope.go:117] "RemoveContainer" containerID="7eae57f25a2c5c6185b9efefc8b3729b06b64180892ecced033765aaebf9b5fa" Feb 23 
13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.201862 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.202473 4851 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.202990 4851 status_manager.go:851] "Failed to get status for pod" podUID="c5a296ee-a904-4283-8849-65abb16717b4" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-npswg\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:55 crc kubenswrapper[4851]: W0223 13:12:55.252500 4851 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27331": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:12:55 crc kubenswrapper[4851]: E0223 13:12:55.252546 4851 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27331\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.968024 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.970747 4851 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.971292 4851 status_manager.go:851] "Failed to get status for pod" podUID="c5a296ee-a904-4283-8849-65abb16717b4" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-npswg\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.971523 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.971702 4851 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.971874 4851 status_manager.go:851] "Failed to get status for pod" podUID="c5a296ee-a904-4283-8849-65abb16717b4" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-npswg\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.972138 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.982750 4851 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.982777 4851 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5" Feb 23 13:12:55 crc kubenswrapper[4851]: E0223 13:12:55.983138 4851 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:55 crc kubenswrapper[4851]: I0223 13:12:55.983604 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:56 crc kubenswrapper[4851]: W0223 13:12:56.003453 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-23a8b07235364d87ecaed46315f0693bb0ab5d52801a6b481a5225da53a56ab4 WatchSource:0}: Error finding container 23a8b07235364d87ecaed46315f0693bb0ab5d52801a6b481a5225da53a56ab4: Status 404 returned error can't find the container with id 23a8b07235364d87ecaed46315f0693bb0ab5d52801a6b481a5225da53a56ab4 Feb 23 13:12:56 crc kubenswrapper[4851]: E0223 13:12:56.095695 4851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="6.4s" Feb 23 13:12:56 crc kubenswrapper[4851]: I0223 13:12:56.210321 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 13:12:56 crc kubenswrapper[4851]: I0223 13:12:56.211120 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 13:12:56 crc kubenswrapper[4851]: I0223 13:12:56.211212 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8ecd0a96774cbf8d0f09793ce56bfca7ab3cf28681b793b48c6c40ad7f06fd40"} Feb 23 13:12:56 crc kubenswrapper[4851]: I0223 13:12:56.212059 4851 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:56 crc kubenswrapper[4851]: I0223 13:12:56.212482 4851 status_manager.go:851] "Failed to get status for pod" podUID="c5a296ee-a904-4283-8849-65abb16717b4" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-npswg\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:56 crc kubenswrapper[4851]: I0223 13:12:56.212576 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"23a8b07235364d87ecaed46315f0693bb0ab5d52801a6b481a5225da53a56ab4"} Feb 23 13:12:56 crc kubenswrapper[4851]: I0223 13:12:56.212715 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:57 crc kubenswrapper[4851]: W0223 13:12:57.170230 4851 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27334": dial tcp 38.102.83.5:6443: connect: connection refused Feb 23 13:12:57 crc kubenswrapper[4851]: E0223 13:12:57.170318 4851 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27334\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Feb 23 13:12:57 crc kubenswrapper[4851]: I0223 13:12:57.220860 4851 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1afce23b26446ea289f7bff8c1d00fd12d9853a356dd9ee1ffa3d0775b55d5b0" exitCode=0 Feb 23 13:12:57 crc kubenswrapper[4851]: I0223 13:12:57.220904 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1afce23b26446ea289f7bff8c1d00fd12d9853a356dd9ee1ffa3d0775b55d5b0"} Feb 23 13:12:57 crc kubenswrapper[4851]: I0223 13:12:57.221172 4851 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5" Feb 23 13:12:57 crc kubenswrapper[4851]: I0223 13:12:57.221191 4851 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5" Feb 23 13:12:57 crc kubenswrapper[4851]: E0223 13:12:57.221637 4851 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:57 crc kubenswrapper[4851]: I0223 13:12:57.221871 4851 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.5:6443: connect: connection 
refused" Feb 23 13:12:57 crc kubenswrapper[4851]: I0223 13:12:57.222425 4851 status_manager.go:851] "Failed to get status for pod" podUID="c5a296ee-a904-4283-8849-65abb16717b4" pod="openshift-machine-config-operator/machine-config-daemon-npswg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-npswg\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:57 crc kubenswrapper[4851]: I0223 13:12:57.222794 4851 status_manager.go:851] "Failed to get status for pod" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.5:6443: connect: connection refused" Feb 23 13:12:58 crc kubenswrapper[4851]: I0223 13:12:58.245140 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"00c138dde14c7e756a120002886a903f1e7e21c9efe214abfb18b1597ff7a423"} Feb 23 13:12:58 crc kubenswrapper[4851]: I0223 13:12:58.246730 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7a334949c1f6a6bdea65858838c1825994fdf0d4ffd19e2318eea5000d09e766"} Feb 23 13:12:58 crc kubenswrapper[4851]: I0223 13:12:58.246831 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b650b0d55cc24f16225e514d4caa33c1a8fb5c191245a5b0933c2d773dd69464"} Feb 23 13:12:59 crc kubenswrapper[4851]: I0223 13:12:59.254574 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cc30409e1d9f9c6ed0751facce42538335d5e7774c1c64092c5586823ad214e7"} Feb 23 13:12:59 crc kubenswrapper[4851]: I0223 13:12:59.254619 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7eff0ef62430b932371c9fabe124365686608d8a9b013f3617645c432a99f49b"} Feb 23 13:12:59 crc kubenswrapper[4851]: I0223 13:12:59.254801 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:12:59 crc kubenswrapper[4851]: I0223 13:12:59.254982 4851 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5" Feb 23 13:12:59 crc kubenswrapper[4851]: I0223 13:12:59.255017 4851 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5" Feb 23 13:12:59 crc kubenswrapper[4851]: I0223 13:12:59.696759 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:12:59 crc kubenswrapper[4851]: I0223 13:12:59.701344 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:13:00 crc kubenswrapper[4851]: I0223 13:13:00.258561 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:13:00 crc kubenswrapper[4851]: I0223 13:13:00.984253 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:13:00 crc kubenswrapper[4851]: I0223 13:13:00.984603 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:13:00 crc kubenswrapper[4851]: I0223 13:13:00.989460 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:13:02 crc kubenswrapper[4851]: I0223 13:13:02.514787 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 13:13:04 crc kubenswrapper[4851]: I0223 13:13:04.319888 4851 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:13:04 crc kubenswrapper[4851]: I0223 13:13:04.329495 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 13:13:05 crc kubenswrapper[4851]: I0223 13:13:05.285111 4851 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5" Feb 23 13:13:05 crc kubenswrapper[4851]: I0223 13:13:05.285140 4851 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5" Feb 23 13:13:05 crc kubenswrapper[4851]: I0223 13:13:05.289659 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 13:13:05 crc kubenswrapper[4851]: I0223 13:13:05.398605 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 13:13:05 crc kubenswrapper[4851]: E0223 13:13:05.986805 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 13:13:05 crc kubenswrapper[4851]: E0223 
13:13:05.994545 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 13:13:05 crc kubenswrapper[4851]: I0223 13:13:05.998788 4851 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="922d32e7-57ab-4497-bc79-10cf85decdab" Feb 23 13:13:06 crc kubenswrapper[4851]: E0223 13:13:06.003401 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 13:13:06 crc kubenswrapper[4851]: I0223 13:13:06.290177 4851 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5" Feb 23 13:13:06 crc kubenswrapper[4851]: I0223 13:13:06.290208 4851 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5" Feb 23 13:13:06 crc kubenswrapper[4851]: I0223 13:13:06.294468 4851 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="922d32e7-57ab-4497-bc79-10cf85decdab" Feb 23 13:13:07 crc kubenswrapper[4851]: I0223 13:13:07.082460 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 13:13:13 crc kubenswrapper[4851]: I0223 13:13:13.633105 4851 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 13:13:14 crc kubenswrapper[4851]: I0223 13:13:14.308882 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 13:13:14 crc kubenswrapper[4851]: I0223 13:13:14.660340 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 13:13:14 crc kubenswrapper[4851]: I0223 13:13:14.756643 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 13:13:15 crc kubenswrapper[4851]: I0223 13:13:15.142496 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 13:13:15 crc kubenswrapper[4851]: I0223 13:13:15.167493 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 23 13:13:15 crc kubenswrapper[4851]: I0223 13:13:15.540213 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 13:13:15 crc kubenswrapper[4851]: I0223 13:13:15.939557 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 13:13:15 crc kubenswrapper[4851]: I0223 13:13:15.977582 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.121300 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.214532 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.345616 4851 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.346238 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.386927 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.523049 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.548012 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.716280 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.738498 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.750774 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.819838 4851 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.923599 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 13:13:16 crc kubenswrapper[4851]: I0223 13:13:16.967823 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.078467 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.081415 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.098892 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.245726 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.290418 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.333169 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.398443 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.414299 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.451954 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.503912 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 13:13:17 
crc kubenswrapper[4851]: I0223 13:13:17.530350 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.655908 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.710689 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.740716 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.789313 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.926773 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.940389 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 23 13:13:17 crc kubenswrapper[4851]: I0223 13:13:17.986842 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.070396 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.078456 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.121769 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.287923 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.322853 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.392009 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.488407 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.595415 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.619682 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.672599 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.791820 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.796924 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.842963 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.861626 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.898705 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.945661 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.968419 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 13:13:18 crc kubenswrapper[4851]: I0223 13:13:18.968444 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 13:13:19 crc kubenswrapper[4851]: I0223 13:13:19.014290 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 23 13:13:19 crc kubenswrapper[4851]: I0223 13:13:19.102010 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 23 13:13:19 crc kubenswrapper[4851]: I0223 13:13:19.161915 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 23 13:13:19 crc kubenswrapper[4851]: I0223 13:13:19.247243 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 23 13:13:19 crc kubenswrapper[4851]: I0223 13:13:19.259616 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 23 13:13:19 crc kubenswrapper[4851]: I0223 13:13:19.275624 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 23 13:13:19 crc kubenswrapper[4851]: I0223 13:13:19.355709 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 23 13:13:19 crc kubenswrapper[4851]: I0223 13:13:19.414616 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 23 13:13:19 crc kubenswrapper[4851]: I0223 13:13:19.628061 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 23 13:13:19 crc kubenswrapper[4851]: I0223 13:13:19.757232 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 23 13:13:19 crc kubenswrapper[4851]: I0223 13:13:19.928266 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.002711 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.020211 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.020275 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.054664 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.190828 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.293386 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.317599 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.334925 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.335086 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.435843 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.514397 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.553680 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.560643 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.608760 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.637392 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.670577 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.714149 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.751620 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.804940 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.816301 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.963987 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 23 13:13:20 crc kubenswrapper[4851]: I0223 13:13:20.970089 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.041872 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.048242 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.059241 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.142864 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.180864 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.200574 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.200632 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.251652 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.256731 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.265192 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.323408 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.456612 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.457778 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.458121 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.504882 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.549155 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.560082 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.570675 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.662655 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.665682 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.757648 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.757716 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.804671 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.843130 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.844256 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.869537 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.878895 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 23 13:13:21 crc kubenswrapper[4851]: I0223 13:13:21.897246 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.027050 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.114503 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.189355 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.283861 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.307748 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.322420 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.334885 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.355851 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.365880 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.376982 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.478729 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.559258 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.606554 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.687040 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.793481 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.807007 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.881142 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.907292 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 23 13:13:22 crc kubenswrapper[4851]: I0223 13:13:22.977176 4851 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.039793 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.110733 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.210357 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.219139 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.265079 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.425323 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.456654 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.500823 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.562923 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.584368 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.602955 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.639011 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.687072 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.701917 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.731054 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.873428 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.896677 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.922889 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.970383 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 23 13:13:23 crc kubenswrapper[4851]: I0223 13:13:23.982369 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.080368 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.179485 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.188975 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.292840 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.354012 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.372555 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.414012 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.426183 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.468962 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.490705 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.640640 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.709420 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.762043 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.906845 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.908659 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.915750 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.946910 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.982947 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 23 13:13:24 crc kubenswrapper[4851]: I0223 13:13:24.985853 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.022164 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.269282 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.285299 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.337106 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.392088 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.393576 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.434811 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.435795 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.460181 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.537928 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.598666 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.626954 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.678550 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.696757 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.712829 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.735908 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.740470 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.833856 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.884997 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.907901 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 23 13:13:25 crc kubenswrapper[4851]: I0223 13:13:25.918950 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.016972 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.047030 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.222238 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.224985 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.239667 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.252293 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.291890 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.310723 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.454194 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.664205 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.758497 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.768287 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.862039 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.944159 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 23 13:13:26 crc kubenswrapper[4851]: I0223 13:13:26.953361 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.000534 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.003177 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.026018 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.029624 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.067114 4851 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.076964 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.077062 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.077393 4851 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.077422 4851 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ed645ec9-2788-4c88-ac43-d030c18eb2a5"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.085609 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.102820 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.102789245 podStartE2EDuration="23.102789245s" podCreationTimestamp="2026-02-23 13:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:13:27.102483016 +0000 UTC m=+361.784186804" watchObservedRunningTime="2026-02-23 13:13:27.102789245 +0000 UTC m=+361.784492963"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.281683 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.470894 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.541556 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.576977 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.687846 4851 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.689776 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.745386 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 23 13:13:27 crc kubenswrapper[4851]: I0223 13:13:27.746486 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 23 13:13:28 crc kubenswrapper[4851]: I0223 13:13:28.024446 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 23 13:13:28 crc kubenswrapper[4851]: I0223 13:13:28.138752 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 23 13:13:28 crc kubenswrapper[4851]: I0223 13:13:28.204903 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 23 13:13:28 crc kubenswrapper[4851]: I0223 13:13:28.297876 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 23 13:13:28 crc kubenswrapper[4851]: I0223 13:13:28.461700 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 23 13:13:28 crc kubenswrapper[4851]: I0223 13:13:28.485298 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 23 13:13:28 crc kubenswrapper[4851]: I0223 13:13:28.522259 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 23 13:13:28 crc kubenswrapper[4851]: I0223 13:13:28.546822 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 23 13:13:28 crc kubenswrapper[4851]: I0223 13:13:28.556315 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 23 13:13:28 crc kubenswrapper[4851]: I0223 13:13:28.900557 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 23 13:13:28 crc kubenswrapper[4851]: I0223 13:13:28.947250 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 23 13:13:29 crc kubenswrapper[4851]: I0223 13:13:29.060387 4851 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 23 13:13:29 crc kubenswrapper[4851]: I0223 13:13:29.178668 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 23 13:13:29 crc kubenswrapper[4851]: I0223 13:13:29.237171 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 23 13:13:29 crc kubenswrapper[4851]: I0223 13:13:29.313585 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 23 13:13:29 crc kubenswrapper[4851]: I0223 13:13:29.437439 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 23 13:13:29 crc kubenswrapper[4851]: I0223 13:13:29.607283 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 23 13:13:29 crc kubenswrapper[4851]: I0223 13:13:29.870667 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 23 13:13:30 crc kubenswrapper[4851]: I0223 13:13:30.020352 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 23 13:13:30 crc kubenswrapper[4851]: I0223 13:13:30.466955 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 23 13:13:30 crc
kubenswrapper[4851]: I0223 13:13:30.511527 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 13:13:30 crc kubenswrapper[4851]: I0223 13:13:30.844702 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 13:13:37 crc kubenswrapper[4851]: I0223 13:13:37.088766 4851 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 13:13:37 crc kubenswrapper[4851]: I0223 13:13:37.089244 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8" gracePeriod=5 Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.657934 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.658552 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.687506 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.687577 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.687620 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.687689 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.687719 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.687717 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: 
"var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.687775 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.687810 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.687856 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.688170 4851 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.688199 4851 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.688213 4851 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.688226 4851 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.694927 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.759300 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.759377 4851 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8" exitCode=137 Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.759424 4851 scope.go:117] "RemoveContainer" containerID="a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.759557 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.777214 4851 scope.go:117] "RemoveContainer" containerID="a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8" Feb 23 13:13:42 crc kubenswrapper[4851]: E0223 13:13:42.777674 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8\": container with ID starting with a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8 not found: ID does not exist" containerID="a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.777711 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8"} err="failed to get container status \"a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8\": rpc error: code = NotFound desc = could not find container 
\"a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8\": container with ID starting with a097f89bb66d9ed6b24d3b083943675a014ae5e37ba6fa0e8d63bd439e77e1b8 not found: ID does not exist" Feb 23 13:13:42 crc kubenswrapper[4851]: I0223 13:13:42.790244 4851 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 13:13:43 crc kubenswrapper[4851]: I0223 13:13:43.046112 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 23 13:13:43 crc kubenswrapper[4851]: I0223 13:13:43.976582 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 23 13:13:44 crc kubenswrapper[4851]: I0223 13:13:44.948461 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 23 13:13:47 crc kubenswrapper[4851]: I0223 13:13:47.723660 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 13:13:47 crc kubenswrapper[4851]: I0223 13:13:47.785141 4851 generic.go:334] "Generic (PLEG): container finished" podID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" containerID="e1d5b8fc9d296c19ee80f54824c3e88ec8bb7eed963e34745f387f68bead21c4" exitCode=0 Feb 23 13:13:47 crc kubenswrapper[4851]: I0223 13:13:47.785192 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" event={"ID":"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4","Type":"ContainerDied","Data":"e1d5b8fc9d296c19ee80f54824c3e88ec8bb7eed963e34745f387f68bead21c4"} Feb 23 13:13:47 crc kubenswrapper[4851]: I0223 13:13:47.785740 4851 scope.go:117] "RemoveContainer" 
containerID="e1d5b8fc9d296c19ee80f54824c3e88ec8bb7eed963e34745f387f68bead21c4" Feb 23 13:13:48 crc kubenswrapper[4851]: I0223 13:13:48.791506 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" event={"ID":"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4","Type":"ContainerStarted","Data":"1783f34cee5d7393a8b03b4c47271b2a2088cdc1eb8c8a6324b9b414f46be70a"} Feb 23 13:13:48 crc kubenswrapper[4851]: I0223 13:13:48.791838 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:13:48 crc kubenswrapper[4851]: I0223 13:13:48.793384 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:13:53 crc kubenswrapper[4851]: I0223 13:13:53.153194 4851 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 13:14:04 crc kubenswrapper[4851]: I0223 13:14:04.714543 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"] Feb 23 13:14:04 crc kubenswrapper[4851]: I0223 13:14:04.715372 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg" podUID="1e6e8981-fe95-42ae-b8bf-6b732cb8f16e" containerName="controller-manager" containerID="cri-o://55390cbac440050c7fb3e25fb93f18ba38162557803572f43326096e06916761" gracePeriod=30 Feb 23 13:14:04 crc kubenswrapper[4851]: I0223 13:14:04.814058 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c"] Feb 23 13:14:04 crc kubenswrapper[4851]: I0223 13:14:04.814440 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" 
podUID="8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f" containerName="route-controller-manager" containerID="cri-o://c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0" gracePeriod=30 Feb 23 13:14:04 crc kubenswrapper[4851]: I0223 13:14:04.921973 4851 generic.go:334] "Generic (PLEG): container finished" podID="1e6e8981-fe95-42ae-b8bf-6b732cb8f16e" containerID="55390cbac440050c7fb3e25fb93f18ba38162557803572f43326096e06916761" exitCode=0 Feb 23 13:14:04 crc kubenswrapper[4851]: I0223 13:14:04.922066 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg" event={"ID":"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e","Type":"ContainerDied","Data":"55390cbac440050c7fb3e25fb93f18ba38162557803572f43326096e06916761"} Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.063493 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.148182 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.172518 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-serving-cert\") pod \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.172693 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-proxy-ca-bundles\") pod \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.172778 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-config\") pod \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.172820 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6kkr\" (UniqueName: \"kubernetes.io/projected/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-kube-api-access-w6kkr\") pod \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.172871 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-client-ca\") pod \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\" (UID: \"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e\") " Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.173988 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-client-ca" (OuterVolumeSpecName: "client-ca") pod "1e6e8981-fe95-42ae-b8bf-6b732cb8f16e" (UID: "1e6e8981-fe95-42ae-b8bf-6b732cb8f16e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.174174 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1e6e8981-fe95-42ae-b8bf-6b732cb8f16e" (UID: "1e6e8981-fe95-42ae-b8bf-6b732cb8f16e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.174784 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-config" (OuterVolumeSpecName: "config") pod "1e6e8981-fe95-42ae-b8bf-6b732cb8f16e" (UID: "1e6e8981-fe95-42ae-b8bf-6b732cb8f16e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.179403 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-kube-api-access-w6kkr" (OuterVolumeSpecName: "kube-api-access-w6kkr") pod "1e6e8981-fe95-42ae-b8bf-6b732cb8f16e" (UID: "1e6e8981-fe95-42ae-b8bf-6b732cb8f16e"). InnerVolumeSpecName "kube-api-access-w6kkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.179454 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1e6e8981-fe95-42ae-b8bf-6b732cb8f16e" (UID: "1e6e8981-fe95-42ae-b8bf-6b732cb8f16e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.274198 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-config\") pod \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.274386 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztj2r\" (UniqueName: \"kubernetes.io/projected/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-kube-api-access-ztj2r\") pod \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.274440 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-serving-cert\") pod \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.274551 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-client-ca\") pod \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\" (UID: \"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f\") " Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.274913 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.274936 4851 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:05 
crc kubenswrapper[4851]: I0223 13:14:05.274951 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.274964 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6kkr\" (UniqueName: \"kubernetes.io/projected/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-kube-api-access-w6kkr\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.274981 4851 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.275572 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f" (UID: "8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.275596 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-config" (OuterVolumeSpecName: "config") pod "8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f" (UID: "8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.277852 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f" (UID: "8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.278051 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-kube-api-access-ztj2r" (OuterVolumeSpecName: "kube-api-access-ztj2r") pod "8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f" (UID: "8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f"). InnerVolumeSpecName "kube-api-access-ztj2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.376603 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztj2r\" (UniqueName: \"kubernetes.io/projected/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-kube-api-access-ztj2r\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.376699 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.376713 4851 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.376725 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.554423 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58669588bd-gcmgl"] Feb 23 13:14:05 crc kubenswrapper[4851]: E0223 13:14:05.554984 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 13:14:05 crc kubenswrapper[4851]: 
I0223 13:14:05.555075 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 13:14:05 crc kubenswrapper[4851]: E0223 13:14:05.555163 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f" containerName="route-controller-manager" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.555243 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f" containerName="route-controller-manager" Feb 23 13:14:05 crc kubenswrapper[4851]: E0223 13:14:05.555311 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" containerName="installer" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.555402 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" containerName="installer" Feb 23 13:14:05 crc kubenswrapper[4851]: E0223 13:14:05.555494 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6e8981-fe95-42ae-b8bf-6b732cb8f16e" containerName="controller-manager" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.555549 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6e8981-fe95-42ae-b8bf-6b732cb8f16e" containerName="controller-manager" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.555882 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6e8981-fe95-42ae-b8bf-6b732cb8f16e" containerName="controller-manager" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.555960 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.556017 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5cb35a1-1b1d-48c6-840d-6b0f90d765d0" containerName="installer" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 
13:14:05.556077 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f" containerName="route-controller-manager" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.556554 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.571613 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd"] Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.574798 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.578221 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58669588bd-gcmgl"] Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.593855 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd"] Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.678224 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58669588bd-gcmgl"] Feb 23 13:14:05 crc kubenswrapper[4851]: E0223 13:14:05.678866 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-dx87l proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" podUID="8e836012-66be-40b0-bd3b-f44617595b43" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.680406 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkjk\" (UniqueName: 
\"kubernetes.io/projected/72dc97cf-63e9-410d-bcf7-275b62060d7a-kube-api-access-6jkjk\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.680515 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-config\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.680599 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72dc97cf-63e9-410d-bcf7-275b62060d7a-serving-cert\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.680654 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-client-ca\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.680694 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx87l\" (UniqueName: \"kubernetes.io/projected/8e836012-66be-40b0-bd3b-f44617595b43-kube-api-access-dx87l\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " 
pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.680744 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72dc97cf-63e9-410d-bcf7-275b62060d7a-client-ca\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.680814 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72dc97cf-63e9-410d-bcf7-275b62060d7a-config\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.680857 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e836012-66be-40b0-bd3b-f44617595b43-serving-cert\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.680881 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-proxy-ca-bundles\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.781744 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/72dc97cf-63e9-410d-bcf7-275b62060d7a-config\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.781824 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e836012-66be-40b0-bd3b-f44617595b43-serving-cert\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.781848 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-proxy-ca-bundles\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.781895 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jkjk\" (UniqueName: \"kubernetes.io/projected/72dc97cf-63e9-410d-bcf7-275b62060d7a-kube-api-access-6jkjk\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.781941 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-config\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc 
kubenswrapper[4851]: I0223 13:14:05.781975 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72dc97cf-63e9-410d-bcf7-275b62060d7a-serving-cert\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.782012 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-client-ca\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.782052 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx87l\" (UniqueName: \"kubernetes.io/projected/8e836012-66be-40b0-bd3b-f44617595b43-kube-api-access-dx87l\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.782103 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72dc97cf-63e9-410d-bcf7-275b62060d7a-client-ca\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.783112 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-proxy-ca-bundles\") pod \"controller-manager-58669588bd-gcmgl\" (UID: 
\"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.783494 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/72dc97cf-63e9-410d-bcf7-275b62060d7a-client-ca\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.783880 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-client-ca\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.784585 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-config\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.785087 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72dc97cf-63e9-410d-bcf7-275b62060d7a-config\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.785535 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e836012-66be-40b0-bd3b-f44617595b43-serving-cert\") 
pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.785992 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72dc97cf-63e9-410d-bcf7-275b62060d7a-serving-cert\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.798406 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx87l\" (UniqueName: \"kubernetes.io/projected/8e836012-66be-40b0-bd3b-f44617595b43-kube-api-access-dx87l\") pod \"controller-manager-58669588bd-gcmgl\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.799191 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jkjk\" (UniqueName: \"kubernetes.io/projected/72dc97cf-63e9-410d-bcf7-275b62060d7a-kube-api-access-6jkjk\") pod \"route-controller-manager-6f47779bbd-4hczd\" (UID: \"72dc97cf-63e9-410d-bcf7-275b62060d7a\") " pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.894879 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.930252 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg" event={"ID":"1e6e8981-fe95-42ae-b8bf-6b732cb8f16e","Type":"ContainerDied","Data":"a1da389fe5aa8e6a291184cc6f0d83af38e835cd75e8c205ba30e356b365610d"} Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.930302 4851 scope.go:117] "RemoveContainer" containerID="55390cbac440050c7fb3e25fb93f18ba38162557803572f43326096e06916761" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.930469 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.932946 4851 generic.go:334] "Generic (PLEG): container finished" podID="8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f" containerID="c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0" exitCode=0 Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.933006 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.933140 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.933229 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" event={"ID":"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f","Type":"ContainerDied","Data":"c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0"} Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.933273 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c" event={"ID":"8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f","Type":"ContainerDied","Data":"6977f908a43c813f5ff09ab8c95b064c4fb8dc04b9b4e4e71cb0a77a870651c5"} Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.950173 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.950628 4851 scope.go:117] "RemoveContainer" containerID="c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.973288 4851 scope.go:117] "RemoveContainer" containerID="c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0" Feb 23 13:14:05 crc kubenswrapper[4851]: E0223 13:14:05.973649 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0\": container with ID starting with c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0 not found: ID does not exist" containerID="c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.973680 4851 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0"} err="failed to get container status \"c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0\": rpc error: code = NotFound desc = could not find container \"c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0\": container with ID starting with c5c0312e9f722ae34281493bb4bd3792df7f306b509cfe339a7476b97cd90ed0 not found: ID does not exist" Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.984626 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"] Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.984661 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7fd9c5955b-mkjkg"] Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.987654 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c"] Feb 23 13:14:05 crc kubenswrapper[4851]: I0223 13:14:05.992010 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d8967cb8b-4fw2c"] Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.078720 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd"] Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.084702 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx87l\" (UniqueName: \"kubernetes.io/projected/8e836012-66be-40b0-bd3b-f44617595b43-kube-api-access-dx87l\") pod \"8e836012-66be-40b0-bd3b-f44617595b43\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.084750 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-client-ca\") pod \"8e836012-66be-40b0-bd3b-f44617595b43\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.084785 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-config\") pod \"8e836012-66be-40b0-bd3b-f44617595b43\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.084826 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e836012-66be-40b0-bd3b-f44617595b43-serving-cert\") pod \"8e836012-66be-40b0-bd3b-f44617595b43\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.084848 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-proxy-ca-bundles\") pod \"8e836012-66be-40b0-bd3b-f44617595b43\" (UID: \"8e836012-66be-40b0-bd3b-f44617595b43\") " Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.086787 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-client-ca" (OuterVolumeSpecName: "client-ca") pod "8e836012-66be-40b0-bd3b-f44617595b43" (UID: "8e836012-66be-40b0-bd3b-f44617595b43"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.087245 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-config" (OuterVolumeSpecName: "config") pod "8e836012-66be-40b0-bd3b-f44617595b43" (UID: "8e836012-66be-40b0-bd3b-f44617595b43"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.087835 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8e836012-66be-40b0-bd3b-f44617595b43" (UID: "8e836012-66be-40b0-bd3b-f44617595b43"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.089511 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e836012-66be-40b0-bd3b-f44617595b43-kube-api-access-dx87l" (OuterVolumeSpecName: "kube-api-access-dx87l") pod "8e836012-66be-40b0-bd3b-f44617595b43" (UID: "8e836012-66be-40b0-bd3b-f44617595b43"). InnerVolumeSpecName "kube-api-access-dx87l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.090986 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e836012-66be-40b0-bd3b-f44617595b43-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8e836012-66be-40b0-bd3b-f44617595b43" (UID: "8e836012-66be-40b0-bd3b-f44617595b43"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.186389 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx87l\" (UniqueName: \"kubernetes.io/projected/8e836012-66be-40b0-bd3b-f44617595b43-kube-api-access-dx87l\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.186434 4851 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.186446 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.186457 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e836012-66be-40b0-bd3b-f44617595b43-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:06 crc kubenswrapper[4851]: I0223 13:14:06.186466 4851 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e836012-66be-40b0-bd3b-f44617595b43-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.223937 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58669588bd-gcmgl" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.223927 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" event={"ID":"72dc97cf-63e9-410d-bcf7-275b62060d7a","Type":"ContainerStarted","Data":"10e5037fd37cd2adb7b1e522f8a7f86c529f10e4d5578a3648028e9b3d40a700"} Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.223989 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" event={"ID":"72dc97cf-63e9-410d-bcf7-275b62060d7a","Type":"ContainerStarted","Data":"e3349c73a2a902b5702a8f52debc43a36b0e01159d838a241666cdab3c9e99c9"} Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.224304 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.229041 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.264302 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f47779bbd-4hczd" podStartSLOduration=2.2642845019999998 podStartE2EDuration="2.264284502s" podCreationTimestamp="2026-02-23 13:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:14:07.261387509 +0000 UTC m=+401.943091207" watchObservedRunningTime="2026-02-23 13:14:07.264284502 +0000 UTC m=+401.945988180" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.318089 4851 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-686674c9b7-96vl7"] Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.318876 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.320963 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.321223 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.321780 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.322081 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.322088 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.322235 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.324417 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58669588bd-gcmgl"] Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.328889 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.334373 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58669588bd-gcmgl"] Feb 23 13:14:07 crc 
kubenswrapper[4851]: I0223 13:14:07.339164 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-686674c9b7-96vl7"] Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.520481 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c380122-3354-469d-83ed-db030d5048ba-serving-cert\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.520587 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n48vh\" (UniqueName: \"kubernetes.io/projected/1c380122-3354-469d-83ed-db030d5048ba-kube-api-access-n48vh\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.520622 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-proxy-ca-bundles\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.520657 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-client-ca\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.520722 
4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-config\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.621754 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-proxy-ca-bundles\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.621810 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n48vh\" (UniqueName: \"kubernetes.io/projected/1c380122-3354-469d-83ed-db030d5048ba-kube-api-access-n48vh\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.621845 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-client-ca\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.621907 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-config\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " 
pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.621960 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c380122-3354-469d-83ed-db030d5048ba-serving-cert\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.623068 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-client-ca\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.623398 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-config\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.623552 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-proxy-ca-bundles\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.627240 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c380122-3354-469d-83ed-db030d5048ba-serving-cert\") pod \"controller-manager-686674c9b7-96vl7\" (UID: 
\"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.638969 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n48vh\" (UniqueName: \"kubernetes.io/projected/1c380122-3354-469d-83ed-db030d5048ba-kube-api-access-n48vh\") pod \"controller-manager-686674c9b7-96vl7\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.937855 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.974217 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e6e8981-fe95-42ae-b8bf-6b732cb8f16e" path="/var/lib/kubelet/pods/1e6e8981-fe95-42ae-b8bf-6b732cb8f16e/volumes" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.975000 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f" path="/var/lib/kubelet/pods/8719ae7d-d2fe-47d1-aa24-8a9dfff97a0f/volumes" Feb 23 13:14:07 crc kubenswrapper[4851]: I0223 13:14:07.975628 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e836012-66be-40b0-bd3b-f44617595b43" path="/var/lib/kubelet/pods/8e836012-66be-40b0-bd3b-f44617595b43/volumes" Feb 23 13:14:08 crc kubenswrapper[4851]: I0223 13:14:08.091821 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-686674c9b7-96vl7"] Feb 23 13:14:08 crc kubenswrapper[4851]: W0223 13:14:08.096164 4851 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c380122_3354_469d_83ed_db030d5048ba.slice/crio-fd8756ffed4fdb8bdac42f76af4cd9c3ea691267ced2f2fb7b9cf4d7f5605ae0 WatchSource:0}: Error finding container fd8756ffed4fdb8bdac42f76af4cd9c3ea691267ced2f2fb7b9cf4d7f5605ae0: Status 404 returned error can't find the container with id fd8756ffed4fdb8bdac42f76af4cd9c3ea691267ced2f2fb7b9cf4d7f5605ae0 Feb 23 13:14:08 crc kubenswrapper[4851]: I0223 13:14:08.577959 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" event={"ID":"1c380122-3354-469d-83ed-db030d5048ba","Type":"ContainerStarted","Data":"fd8756ffed4fdb8bdac42f76af4cd9c3ea691267ced2f2fb7b9cf4d7f5605ae0"} Feb 23 13:14:09 crc kubenswrapper[4851]: I0223 13:14:09.586126 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" event={"ID":"1c380122-3354-469d-83ed-db030d5048ba","Type":"ContainerStarted","Data":"6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065"} Feb 23 13:14:09 crc kubenswrapper[4851]: I0223 13:14:09.586627 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:09 crc kubenswrapper[4851]: I0223 13:14:09.591384 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:09 crc kubenswrapper[4851]: I0223 13:14:09.613828 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" podStartSLOduration=4.613804229 podStartE2EDuration="4.613804229s" podCreationTimestamp="2026-02-23 13:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:14:09.612775769 +0000 UTC 
m=+404.294479467" watchObservedRunningTime="2026-02-23 13:14:09.613804229 +0000 UTC m=+404.295507907" Feb 23 13:14:26 crc kubenswrapper[4851]: I0223 13:14:26.078030 4851 scope.go:117] "RemoveContainer" containerID="e874f43a228bb117413e15a3a3160c2c77f8cc57e529e782f74908cc071da3bf" Feb 23 13:14:26 crc kubenswrapper[4851]: I0223 13:14:26.092145 4851 scope.go:117] "RemoveContainer" containerID="a5010b7882ded80404182b75180db14e45852403d78d2e3f8dbdcd078d4db871" Feb 23 13:14:26 crc kubenswrapper[4851]: I0223 13:14:26.102210 4851 scope.go:117] "RemoveContainer" containerID="6138d9a8aeb4145999bcba96f232438b74dec95089e66b6c6197cf8f1f50fef2" Feb 23 13:14:26 crc kubenswrapper[4851]: I0223 13:14:26.119202 4851 scope.go:117] "RemoveContainer" containerID="2c7f4f2b842f11c5f33cb57cd3d5f554ff1fc872accec39647e0ba2dc47cd0f7" Feb 23 13:14:26 crc kubenswrapper[4851]: I0223 13:14:26.135930 4851 scope.go:117] "RemoveContainer" containerID="94ccb6525c27ff88c325822bd13c9749f8ba2126262d64221e0a6dadb1b0cd78" Feb 23 13:14:44 crc kubenswrapper[4851]: I0223 13:14:44.681429 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-686674c9b7-96vl7"] Feb 23 13:14:44 crc kubenswrapper[4851]: I0223 13:14:44.683139 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" podUID="1c380122-3354-469d-83ed-db030d5048ba" containerName="controller-manager" containerID="cri-o://6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065" gracePeriod=30 Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.081129 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.116364 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n48vh\" (UniqueName: \"kubernetes.io/projected/1c380122-3354-469d-83ed-db030d5048ba-kube-api-access-n48vh\") pod \"1c380122-3354-469d-83ed-db030d5048ba\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.116453 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-client-ca\") pod \"1c380122-3354-469d-83ed-db030d5048ba\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.116486 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-proxy-ca-bundles\") pod \"1c380122-3354-469d-83ed-db030d5048ba\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.116568 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c380122-3354-469d-83ed-db030d5048ba-serving-cert\") pod \"1c380122-3354-469d-83ed-db030d5048ba\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.116596 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-config\") pod \"1c380122-3354-469d-83ed-db030d5048ba\" (UID: \"1c380122-3354-469d-83ed-db030d5048ba\") " Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.117405 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-client-ca" (OuterVolumeSpecName: "client-ca") pod "1c380122-3354-469d-83ed-db030d5048ba" (UID: "1c380122-3354-469d-83ed-db030d5048ba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.117513 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-config" (OuterVolumeSpecName: "config") pod "1c380122-3354-469d-83ed-db030d5048ba" (UID: "1c380122-3354-469d-83ed-db030d5048ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.117968 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1c380122-3354-469d-83ed-db030d5048ba" (UID: "1c380122-3354-469d-83ed-db030d5048ba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.121835 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c380122-3354-469d-83ed-db030d5048ba-kube-api-access-n48vh" (OuterVolumeSpecName: "kube-api-access-n48vh") pod "1c380122-3354-469d-83ed-db030d5048ba" (UID: "1c380122-3354-469d-83ed-db030d5048ba"). InnerVolumeSpecName "kube-api-access-n48vh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.122446 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c380122-3354-469d-83ed-db030d5048ba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1c380122-3354-469d-83ed-db030d5048ba" (UID: "1c380122-3354-469d-83ed-db030d5048ba"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.218201 4851 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c380122-3354-469d-83ed-db030d5048ba-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.218540 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.218549 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n48vh\" (UniqueName: \"kubernetes.io/projected/1c380122-3354-469d-83ed-db030d5048ba-kube-api-access-n48vh\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.218560 4851 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.218568 4851 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c380122-3354-469d-83ed-db030d5048ba-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.704491 4851 generic.go:334] "Generic (PLEG): container finished" podID="1c380122-3354-469d-83ed-db030d5048ba" containerID="6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065" exitCode=0 Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.704531 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.704551 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" event={"ID":"1c380122-3354-469d-83ed-db030d5048ba","Type":"ContainerDied","Data":"6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065"} Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.704594 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-686674c9b7-96vl7" event={"ID":"1c380122-3354-469d-83ed-db030d5048ba","Type":"ContainerDied","Data":"fd8756ffed4fdb8bdac42f76af4cd9c3ea691267ced2f2fb7b9cf4d7f5605ae0"} Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.704616 4851 scope.go:117] "RemoveContainer" containerID="6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.720611 4851 scope.go:117] "RemoveContainer" containerID="6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065" Feb 23 13:14:45 crc kubenswrapper[4851]: E0223 13:14:45.721033 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065\": container with ID starting with 6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065 not found: ID does not exist" containerID="6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.721077 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065"} err="failed to get container status \"6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065\": rpc error: code = NotFound desc = could not find container 
\"6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065\": container with ID starting with 6f21a53843d80a485a1005873411e1740ab0594d03aa550a276910787f39a065 not found: ID does not exist" Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.729608 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-686674c9b7-96vl7"] Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.732405 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-686674c9b7-96vl7"] Feb 23 13:14:45 crc kubenswrapper[4851]: I0223 13:14:45.976290 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c380122-3354-469d-83ed-db030d5048ba" path="/var/lib/kubelet/pods/1c380122-3354-469d-83ed-db030d5048ba/volumes" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.644830 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58669588bd-922zn"] Feb 23 13:14:46 crc kubenswrapper[4851]: E0223 13:14:46.645052 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c380122-3354-469d-83ed-db030d5048ba" containerName="controller-manager" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.645065 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c380122-3354-469d-83ed-db030d5048ba" containerName="controller-manager" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.645166 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c380122-3354-469d-83ed-db030d5048ba" containerName="controller-manager" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.645565 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.647729 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.647874 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.649661 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.649805 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.650306 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.650313 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.657110 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.660318 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58669588bd-922zn"] Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.743144 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fe656b-bbb9-4942-aad0-582d5e64fef3-client-ca\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " 
pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.743313 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjr9q\" (UniqueName: \"kubernetes.io/projected/10fe656b-bbb9-4942-aad0-582d5e64fef3-kube-api-access-mjr9q\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.743429 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fe656b-bbb9-4942-aad0-582d5e64fef3-proxy-ca-bundles\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.743490 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fe656b-bbb9-4942-aad0-582d5e64fef3-config\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.743785 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fe656b-bbb9-4942-aad0-582d5e64fef3-serving-cert\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.845689 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/10fe656b-bbb9-4942-aad0-582d5e64fef3-serving-cert\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.845774 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fe656b-bbb9-4942-aad0-582d5e64fef3-client-ca\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.845830 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjr9q\" (UniqueName: \"kubernetes.io/projected/10fe656b-bbb9-4942-aad0-582d5e64fef3-kube-api-access-mjr9q\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.845868 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fe656b-bbb9-4942-aad0-582d5e64fef3-proxy-ca-bundles\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.845893 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fe656b-bbb9-4942-aad0-582d5e64fef3-config\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.847773 
4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10fe656b-bbb9-4942-aad0-582d5e64fef3-client-ca\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.847829 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10fe656b-bbb9-4942-aad0-582d5e64fef3-proxy-ca-bundles\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.847889 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10fe656b-bbb9-4942-aad0-582d5e64fef3-config\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.852505 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10fe656b-bbb9-4942-aad0-582d5e64fef3-serving-cert\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.865662 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjr9q\" (UniqueName: \"kubernetes.io/projected/10fe656b-bbb9-4942-aad0-582d5e64fef3-kube-api-access-mjr9q\") pod \"controller-manager-58669588bd-922zn\" (UID: \"10fe656b-bbb9-4942-aad0-582d5e64fef3\") " pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 
13:14:46 crc kubenswrapper[4851]: I0223 13:14:46.961069 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.199548 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58669588bd-922zn"] Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.596825 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-crdlw"] Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.598415 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.626556 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-crdlw"] Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.659191 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05d19d48-9b92-4125-88f3-5b5d6f902d6e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.659253 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05d19d48-9b92-4125-88f3-5b5d6f902d6e-registry-certificates\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.659296 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05d19d48-9b92-4125-88f3-5b5d6f902d6e-bound-sa-token\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.659481 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.659554 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05d19d48-9b92-4125-88f3-5b5d6f902d6e-trusted-ca\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.659680 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05d19d48-9b92-4125-88f3-5b5d6f902d6e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.659786 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05d19d48-9b92-4125-88f3-5b5d6f902d6e-registry-tls\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.659813 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc9rd\" (UniqueName: \"kubernetes.io/projected/05d19d48-9b92-4125-88f3-5b5d6f902d6e-kube-api-access-vc9rd\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.690591 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.723285 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58669588bd-922zn" event={"ID":"10fe656b-bbb9-4942-aad0-582d5e64fef3","Type":"ContainerStarted","Data":"98b93c709af5a20e393948631a5a5c9b8f2c6fdaec98b644517d69b2aae486c4"} Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.723372 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58669588bd-922zn" event={"ID":"10fe656b-bbb9-4942-aad0-582d5e64fef3","Type":"ContainerStarted","Data":"893ad10e0eeeeea26a875b3ba28edc9afa4825d8acb7acde0fe10a6d7dc862c8"} Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.723583 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.728042 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-58669588bd-922zn" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.743765 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58669588bd-922zn" podStartSLOduration=3.743730781 podStartE2EDuration="3.743730781s" podCreationTimestamp="2026-02-23 13:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:14:47.743565036 +0000 UTC m=+442.425268734" watchObservedRunningTime="2026-02-23 13:14:47.743730781 +0000 UTC m=+442.425434459" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.760765 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05d19d48-9b92-4125-88f3-5b5d6f902d6e-trusted-ca\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.760855 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05d19d48-9b92-4125-88f3-5b5d6f902d6e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.760937 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05d19d48-9b92-4125-88f3-5b5d6f902d6e-registry-tls\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.760962 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vc9rd\" (UniqueName: \"kubernetes.io/projected/05d19d48-9b92-4125-88f3-5b5d6f902d6e-kube-api-access-vc9rd\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.761023 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05d19d48-9b92-4125-88f3-5b5d6f902d6e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.761046 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05d19d48-9b92-4125-88f3-5b5d6f902d6e-registry-certificates\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.761092 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05d19d48-9b92-4125-88f3-5b5d6f902d6e-bound-sa-token\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.761985 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/05d19d48-9b92-4125-88f3-5b5d6f902d6e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.762833 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/05d19d48-9b92-4125-88f3-5b5d6f902d6e-registry-certificates\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.763595 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05d19d48-9b92-4125-88f3-5b5d6f902d6e-trusted-ca\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.770653 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/05d19d48-9b92-4125-88f3-5b5d6f902d6e-registry-tls\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.770668 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/05d19d48-9b92-4125-88f3-5b5d6f902d6e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.785640 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc9rd\" (UniqueName: \"kubernetes.io/projected/05d19d48-9b92-4125-88f3-5b5d6f902d6e-kube-api-access-vc9rd\") pod \"image-registry-66df7c8f76-crdlw\" 
(UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.787104 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/05d19d48-9b92-4125-88f3-5b5d6f902d6e-bound-sa-token\") pod \"image-registry-66df7c8f76-crdlw\" (UID: \"05d19d48-9b92-4125-88f3-5b5d6f902d6e\") " pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:47 crc kubenswrapper[4851]: I0223 13:14:47.915037 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:48 crc kubenswrapper[4851]: I0223 13:14:48.124453 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-crdlw"] Feb 23 13:14:48 crc kubenswrapper[4851]: I0223 13:14:48.728258 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" event={"ID":"05d19d48-9b92-4125-88f3-5b5d6f902d6e","Type":"ContainerStarted","Data":"ca692eb266dc469896c4b836b4a6d29726266c81fe5098b731ca4ce42347b2f3"} Feb 23 13:14:48 crc kubenswrapper[4851]: I0223 13:14:48.728652 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" event={"ID":"05d19d48-9b92-4125-88f3-5b5d6f902d6e","Type":"ContainerStarted","Data":"124caf4ca701f96fad4793b590cff1e472e21e4928e071917f6bfbffa0907d5a"} Feb 23 13:14:48 crc kubenswrapper[4851]: I0223 13:14:48.755735 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" podStartSLOduration=1.755705717 podStartE2EDuration="1.755705717s" podCreationTimestamp="2026-02-23 13:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-23 13:14:48.753470643 +0000 UTC m=+443.435174341" watchObservedRunningTime="2026-02-23 13:14:48.755705717 +0000 UTC m=+443.437409395" Feb 23 13:14:49 crc kubenswrapper[4851]: I0223 13:14:49.732586 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:14:51 crc kubenswrapper[4851]: I0223 13:14:51.967958 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:14:51 crc kubenswrapper[4851]: I0223 13:14:51.968323 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:14:51 crc kubenswrapper[4851]: I0223 13:14:51.984220 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:14:52 crc kubenswrapper[4851]: I0223 13:14:52.442762 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:14:52 crc kubenswrapper[4851]: I0223 13:14:52.569477 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 13:14:53 crc kubenswrapper[4851]: I0223 13:14:53.103519 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:14:53 crc kubenswrapper[4851]: I0223 13:14:53.103839 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:14:53 crc kubenswrapper[4851]: I0223 13:14:53.107411 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:14:53 crc kubenswrapper[4851]: I0223 13:14:53.107687 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:14:53 crc kubenswrapper[4851]: I0223 13:14:53.171217 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:14:53 crc kubenswrapper[4851]: I0223 13:14:53.269343 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 13:14:53 crc kubenswrapper[4851]: W0223 13:14:53.635463 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-553aa5bf089b7e1c7a1ff1e2b7a6105e465173ea14c455ceedef4cc52c2ae90f WatchSource:0}: Error finding container 553aa5bf089b7e1c7a1ff1e2b7a6105e465173ea14c455ceedef4cc52c2ae90f: Status 404 returned error can't find the container with id 553aa5bf089b7e1c7a1ff1e2b7a6105e465173ea14c455ceedef4cc52c2ae90f Feb 23 13:14:53 crc kubenswrapper[4851]: W0223 13:14:53.708436 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-4f223a70ff2e9e5b4f2d4803c1a2e25d5ba374d3f161924631d5f84a416700bd WatchSource:0}: Error finding container 4f223a70ff2e9e5b4f2d4803c1a2e25d5ba374d3f161924631d5f84a416700bd: Status 404 returned error can't find the container with id 4f223a70ff2e9e5b4f2d4803c1a2e25d5ba374d3f161924631d5f84a416700bd Feb 23 13:14:53 crc kubenswrapper[4851]: I0223 13:14:53.765004 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4f223a70ff2e9e5b4f2d4803c1a2e25d5ba374d3f161924631d5f84a416700bd"} Feb 23 13:14:53 crc kubenswrapper[4851]: I0223 13:14:53.766306 4851 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"553aa5bf089b7e1c7a1ff1e2b7a6105e465173ea14c455ceedef4cc52c2ae90f"} Feb 23 13:14:53 crc kubenswrapper[4851]: I0223 13:14:53.767563 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"5956a8cd35a58a7e13219be2f216ba6f7898f70f8f9e18437c0cb3980f32d8d7"} Feb 23 13:14:53 crc kubenswrapper[4851]: I0223 13:14:53.767585 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3375a9906442b2050ba0e72d2a51e6cfea50c202a824e52d0ce01f9079e3dded"} Feb 23 13:14:54 crc kubenswrapper[4851]: I0223 13:14:54.775521 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d5d5b9be43736f65b774989b185e210e48a14844e48d6c8879e1a5df5b6dbfc0"} Feb 23 13:14:54 crc kubenswrapper[4851]: I0223 13:14:54.776633 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9e3180ff25713e1adb77f5a3949d634f97c9d3e3d2d649632433402d7ac6da8b"} Feb 23 13:14:54 crc kubenswrapper[4851]: I0223 13:14:54.776790 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.167555 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9"] Feb 23 
13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.169180 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.172420 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.173177 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9"] Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.173783 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.293949 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-config-volume\") pod \"collect-profiles-29530875-xcnh9\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.293994 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh672\" (UniqueName: \"kubernetes.io/projected/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-kube-api-access-fh672\") pod \"collect-profiles-29530875-xcnh9\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.294042 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-secret-volume\") pod 
\"collect-profiles-29530875-xcnh9\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.395023 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-secret-volume\") pod \"collect-profiles-29530875-xcnh9\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.395099 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-config-volume\") pod \"collect-profiles-29530875-xcnh9\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.395125 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh672\" (UniqueName: \"kubernetes.io/projected/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-kube-api-access-fh672\") pod \"collect-profiles-29530875-xcnh9\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.396446 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-config-volume\") pod \"collect-profiles-29530875-xcnh9\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.401164 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-secret-volume\") pod \"collect-profiles-29530875-xcnh9\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.411413 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh672\" (UniqueName: \"kubernetes.io/projected/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-kube-api-access-fh672\") pod \"collect-profiles-29530875-xcnh9\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.489570 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:00 crc kubenswrapper[4851]: I0223 13:15:00.920560 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9"] Feb 23 13:15:01 crc kubenswrapper[4851]: I0223 13:15:01.821773 4851 generic.go:334] "Generic (PLEG): container finished" podID="4a565598-3ebd-4ad0-a2e9-7c06501d8e1b" containerID="23bce3b235d93ac46533a3a90f560b626d45cbf82c07a17f5b6765ad5bbef436" exitCode=0 Feb 23 13:15:01 crc kubenswrapper[4851]: I0223 13:15:01.821839 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" event={"ID":"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b","Type":"ContainerDied","Data":"23bce3b235d93ac46533a3a90f560b626d45cbf82c07a17f5b6765ad5bbef436"} Feb 23 13:15:01 crc kubenswrapper[4851]: I0223 13:15:01.821904 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" 
event={"ID":"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b","Type":"ContainerStarted","Data":"3507efd8fb65a3478a826f97f4e90fde0efb2942e19e92b37b2df2900f6dad3d"} Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.155773 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.335010 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-secret-volume\") pod \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.335229 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-config-volume\") pod \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.335520 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh672\" (UniqueName: \"kubernetes.io/projected/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-kube-api-access-fh672\") pod \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\" (UID: \"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b\") " Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.336009 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-config-volume" (OuterVolumeSpecName: "config-volume") pod "4a565598-3ebd-4ad0-a2e9-7c06501d8e1b" (UID: "4a565598-3ebd-4ad0-a2e9-7c06501d8e1b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.336196 4851 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.341254 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4a565598-3ebd-4ad0-a2e9-7c06501d8e1b" (UID: "4a565598-3ebd-4ad0-a2e9-7c06501d8e1b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.342146 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-kube-api-access-fh672" (OuterVolumeSpecName: "kube-api-access-fh672") pod "4a565598-3ebd-4ad0-a2e9-7c06501d8e1b" (UID: "4a565598-3ebd-4ad0-a2e9-7c06501d8e1b"). InnerVolumeSpecName "kube-api-access-fh672". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.437247 4851 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.437299 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh672\" (UniqueName: \"kubernetes.io/projected/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b-kube-api-access-fh672\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.838149 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" event={"ID":"4a565598-3ebd-4ad0-a2e9-7c06501d8e1b","Type":"ContainerDied","Data":"3507efd8fb65a3478a826f97f4e90fde0efb2942e19e92b37b2df2900f6dad3d"} Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.838185 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9" Feb 23 13:15:03 crc kubenswrapper[4851]: I0223 13:15:03.838187 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3507efd8fb65a3478a826f97f4e90fde0efb2942e19e92b37b2df2900f6dad3d" Feb 23 13:15:07 crc kubenswrapper[4851]: I0223 13:15:07.918984 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-crdlw" Feb 23 13:15:07 crc kubenswrapper[4851]: I0223 13:15:07.965411 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z9hs4"] Feb 23 13:15:11 crc kubenswrapper[4851]: I0223 13:15:11.924741 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:15:11 crc kubenswrapper[4851]: I0223 13:15:11.925105 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:15:23 crc kubenswrapper[4851]: I0223 13:15:23.176770 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 13:15:26 crc kubenswrapper[4851]: I0223 13:15:26.178566 4851 scope.go:117] "RemoveContainer" containerID="23df1894bbb39d26536afa04500306de5a89cf8ef9478d3eba43f25ab9754407" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.005152 4851 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" podUID="598d1af4-7f4c-4815-8b0c-bd364fcc191d" containerName="registry" containerID="cri-o://66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce" gracePeriod=30 Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.385447 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.466017 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/598d1af4-7f4c-4815-8b0c-bd364fcc191d-ca-trust-extracted\") pod \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.466061 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-certificates\") pod \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.466094 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-trusted-ca\") pod \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.466155 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/598d1af4-7f4c-4815-8b0c-bd364fcc191d-installation-pull-secrets\") pod \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.466273 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.466301 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-tls\") pod \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.466322 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-bound-sa-token\") pod \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.466378 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmdgz\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-kube-api-access-cmdgz\") pod \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\" (UID: \"598d1af4-7f4c-4815-8b0c-bd364fcc191d\") " Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.466915 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "598d1af4-7f4c-4815-8b0c-bd364fcc191d" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.467560 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "598d1af4-7f4c-4815-8b0c-bd364fcc191d" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.472846 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-kube-api-access-cmdgz" (OuterVolumeSpecName: "kube-api-access-cmdgz") pod "598d1af4-7f4c-4815-8b0c-bd364fcc191d" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d"). InnerVolumeSpecName "kube-api-access-cmdgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.473024 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598d1af4-7f4c-4815-8b0c-bd364fcc191d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "598d1af4-7f4c-4815-8b0c-bd364fcc191d" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.473097 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "598d1af4-7f4c-4815-8b0c-bd364fcc191d" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.473499 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "598d1af4-7f4c-4815-8b0c-bd364fcc191d" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.479727 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "598d1af4-7f4c-4815-8b0c-bd364fcc191d" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.496541 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/598d1af4-7f4c-4815-8b0c-bd364fcc191d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "598d1af4-7f4c-4815-8b0c-bd364fcc191d" (UID: "598d1af4-7f4c-4815-8b0c-bd364fcc191d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.567681 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmdgz\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-kube-api-access-cmdgz\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.567720 4851 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/598d1af4-7f4c-4815-8b0c-bd364fcc191d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.567752 4851 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.567768 4851 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/598d1af4-7f4c-4815-8b0c-bd364fcc191d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.567780 4851 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/598d1af4-7f4c-4815-8b0c-bd364fcc191d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.567792 4851 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.567802 4851 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/598d1af4-7f4c-4815-8b0c-bd364fcc191d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:33 crc 
kubenswrapper[4851]: I0223 13:15:33.993877 4851 generic.go:334] "Generic (PLEG): container finished" podID="598d1af4-7f4c-4815-8b0c-bd364fcc191d" containerID="66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce" exitCode=0 Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.993944 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" event={"ID":"598d1af4-7f4c-4815-8b0c-bd364fcc191d","Type":"ContainerDied","Data":"66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce"} Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.994181 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" event={"ID":"598d1af4-7f4c-4815-8b0c-bd364fcc191d","Type":"ContainerDied","Data":"122abd91a31ccf77db4a48cf878a4573ec5c006ee2ec42edaef03fc543ea5597"} Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.994028 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-z9hs4" Feb 23 13:15:33 crc kubenswrapper[4851]: I0223 13:15:33.994202 4851 scope.go:117] "RemoveContainer" containerID="66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce" Feb 23 13:15:34 crc kubenswrapper[4851]: I0223 13:15:34.018023 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z9hs4"] Feb 23 13:15:34 crc kubenswrapper[4851]: I0223 13:15:34.021778 4851 scope.go:117] "RemoveContainer" containerID="66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce" Feb 23 13:15:34 crc kubenswrapper[4851]: E0223 13:15:34.022213 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce\": container with ID starting with 66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce not found: ID does not exist" containerID="66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce" Feb 23 13:15:34 crc kubenswrapper[4851]: I0223 13:15:34.022256 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce"} err="failed to get container status \"66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce\": rpc error: code = NotFound desc = could not find container \"66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce\": container with ID starting with 66777af990de050ffffb240f2de23e1c2fe6f23513937ba5ac0fe2ba37feedce not found: ID does not exist" Feb 23 13:15:34 crc kubenswrapper[4851]: I0223 13:15:34.024447 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-z9hs4"] Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.814625 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-dpz2p"] Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.815522 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dpz2p" podUID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" containerName="registry-server" containerID="cri-o://b570034b7a043905cb826d937b5f0211d8f3661095b665ea6f58066190b82334" gracePeriod=30 Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.817882 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vvh57"] Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.818127 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vvh57" podUID="4e65eda3-0eae-4672-9f18-c87148fcc449" containerName="registry-server" containerID="cri-o://0e412f72ce1da68caf5aedd67fd7c81eed4d2e40188ae60781d07ad5a28f6a25" gracePeriod=30 Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.828250 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wwn4t"] Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.828534 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" podUID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" containerName="marketplace-operator" containerID="cri-o://1783f34cee5d7393a8b03b4c47271b2a2088cdc1eb8c8a6324b9b414f46be70a" gracePeriod=30 Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.841366 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vbb9"] Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.841666 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6vbb9" podUID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" containerName="registry-server" 
containerID="cri-o://a341665c7d6ce87497efa544fcf0e5fe2561647d374d6a19518daa7adfecc60e" gracePeriod=30 Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.847278 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jxzb2"] Feb 23 13:15:35 crc kubenswrapper[4851]: E0223 13:15:35.847560 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a565598-3ebd-4ad0-a2e9-7c06501d8e1b" containerName="collect-profiles" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.847577 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a565598-3ebd-4ad0-a2e9-7c06501d8e1b" containerName="collect-profiles" Feb 23 13:15:35 crc kubenswrapper[4851]: E0223 13:15:35.847594 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598d1af4-7f4c-4815-8b0c-bd364fcc191d" containerName="registry" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.847602 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="598d1af4-7f4c-4815-8b0c-bd364fcc191d" containerName="registry" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.847721 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="598d1af4-7f4c-4815-8b0c-bd364fcc191d" containerName="registry" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.847736 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a565598-3ebd-4ad0-a2e9-7c06501d8e1b" containerName="collect-profiles" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.848189 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.863148 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mcn2z"] Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.863439 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mcn2z" podUID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" containerName="registry-server" containerID="cri-o://740c1621bb4d17314c3ec5c38ff740e9d6dbeeab38efb232e64648e040f56950" gracePeriod=30 Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.866902 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jxzb2"] Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.896197 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz86c\" (UniqueName: \"kubernetes.io/projected/0d8139b6-0c9b-48cf-b664-44304568f2d1-kube-api-access-dz86c\") pod \"marketplace-operator-79b997595-jxzb2\" (UID: \"0d8139b6-0c9b-48cf-b664-44304568f2d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.896280 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d8139b6-0c9b-48cf-b664-44304568f2d1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jxzb2\" (UID: \"0d8139b6-0c9b-48cf-b664-44304568f2d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.896307 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/0d8139b6-0c9b-48cf-b664-44304568f2d1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jxzb2\" (UID: \"0d8139b6-0c9b-48cf-b664-44304568f2d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.976620 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598d1af4-7f4c-4815-8b0c-bd364fcc191d" path="/var/lib/kubelet/pods/598d1af4-7f4c-4815-8b0c-bd364fcc191d/volumes" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.998084 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz86c\" (UniqueName: \"kubernetes.io/projected/0d8139b6-0c9b-48cf-b664-44304568f2d1-kube-api-access-dz86c\") pod \"marketplace-operator-79b997595-jxzb2\" (UID: \"0d8139b6-0c9b-48cf-b664-44304568f2d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.998173 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0d8139b6-0c9b-48cf-b664-44304568f2d1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jxzb2\" (UID: \"0d8139b6-0c9b-48cf-b664-44304568f2d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.998210 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d8139b6-0c9b-48cf-b664-44304568f2d1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jxzb2\" (UID: \"0d8139b6-0c9b-48cf-b664-44304568f2d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:35 crc kubenswrapper[4851]: I0223 13:15:35.999454 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/0d8139b6-0c9b-48cf-b664-44304568f2d1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jxzb2\" (UID: \"0d8139b6-0c9b-48cf-b664-44304568f2d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.007130 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0d8139b6-0c9b-48cf-b664-44304568f2d1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jxzb2\" (UID: \"0d8139b6-0c9b-48cf-b664-44304568f2d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.013237 4851 generic.go:334] "Generic (PLEG): container finished" podID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" containerID="b570034b7a043905cb826d937b5f0211d8f3661095b665ea6f58066190b82334" exitCode=0 Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.013346 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpz2p" event={"ID":"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7","Type":"ContainerDied","Data":"b570034b7a043905cb826d937b5f0211d8f3661095b665ea6f58066190b82334"} Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.020740 4851 generic.go:334] "Generic (PLEG): container finished" podID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" containerID="1783f34cee5d7393a8b03b4c47271b2a2088cdc1eb8c8a6324b9b414f46be70a" exitCode=0 Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.020784 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" event={"ID":"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4","Type":"ContainerDied","Data":"1783f34cee5d7393a8b03b4c47271b2a2088cdc1eb8c8a6324b9b414f46be70a"} Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.020816 4851 scope.go:117] "RemoveContainer" 
containerID="e1d5b8fc9d296c19ee80f54824c3e88ec8bb7eed963e34745f387f68bead21c4" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.028362 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz86c\" (UniqueName: \"kubernetes.io/projected/0d8139b6-0c9b-48cf-b664-44304568f2d1-kube-api-access-dz86c\") pod \"marketplace-operator-79b997595-jxzb2\" (UID: \"0d8139b6-0c9b-48cf-b664-44304568f2d1\") " pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.031145 4851 generic.go:334] "Generic (PLEG): container finished" podID="4e65eda3-0eae-4672-9f18-c87148fcc449" containerID="0e412f72ce1da68caf5aedd67fd7c81eed4d2e40188ae60781d07ad5a28f6a25" exitCode=0 Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.031258 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvh57" event={"ID":"4e65eda3-0eae-4672-9f18-c87148fcc449","Type":"ContainerDied","Data":"0e412f72ce1da68caf5aedd67fd7c81eed4d2e40188ae60781d07ad5a28f6a25"} Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.036282 4851 generic.go:334] "Generic (PLEG): container finished" podID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" containerID="a341665c7d6ce87497efa544fcf0e5fe2561647d374d6a19518daa7adfecc60e" exitCode=0 Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.036372 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vbb9" event={"ID":"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199","Type":"ContainerDied","Data":"a341665c7d6ce87497efa544fcf0e5fe2561647d374d6a19518daa7adfecc60e"} Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.049691 4851 generic.go:334] "Generic (PLEG): container finished" podID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" containerID="740c1621bb4d17314c3ec5c38ff740e9d6dbeeab38efb232e64648e040f56950" exitCode=0 Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.049727 4851 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcn2z" event={"ID":"01938f77-146f-4d3d-a8f6-d1d4673ad3d4","Type":"ContainerDied","Data":"740c1621bb4d17314c3ec5c38ff740e9d6dbeeab38efb232e64648e040f56950"} Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.166278 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.301059 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpz2p" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.336482 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvh57" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.357801 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.369227 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.369370 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403284 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-trusted-ca\") pod \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403355 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbg9v\" (UniqueName: \"kubernetes.io/projected/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-kube-api-access-tbg9v\") pod \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403384 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-catalog-content\") pod \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403415 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8r7w\" (UniqueName: \"kubernetes.io/projected/4e65eda3-0eae-4672-9f18-c87148fcc449-kube-api-access-v8r7w\") pod \"4e65eda3-0eae-4672-9f18-c87148fcc449\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403482 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-utilities\") pod \"4e65eda3-0eae-4672-9f18-c87148fcc449\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403514 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-utilities\") pod \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403538 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6grxx\" (UniqueName: \"kubernetes.io/projected/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-kube-api-access-6grxx\") pod \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403568 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-utilities\") pod \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403588 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-catalog-content\") pod \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403627 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-catalog-content\") pod \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\" (UID: \"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403661 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-operator-metrics\") pod 
\"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\" (UID: \"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403692 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljn2m\" (UniqueName: \"kubernetes.io/projected/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-kube-api-access-ljn2m\") pod \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\" (UID: \"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403717 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-utilities\") pod \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403757 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-catalog-content\") pod \"4e65eda3-0eae-4672-9f18-c87148fcc449\" (UID: \"4e65eda3-0eae-4672-9f18-c87148fcc449\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.403817 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nb98g\" (UniqueName: \"kubernetes.io/projected/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-kube-api-access-nb98g\") pod \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\" (UID: \"01938f77-146f-4d3d-a8f6-d1d4673ad3d4\") " Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.406072 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" (UID: "c62cfc6b-827b-499f-a5c9-e8a1e89df8f4"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.407165 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-utilities" (OuterVolumeSpecName: "utilities") pod "01938f77-146f-4d3d-a8f6-d1d4673ad3d4" (UID: "01938f77-146f-4d3d-a8f6-d1d4673ad3d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.409303 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-utilities" (OuterVolumeSpecName: "utilities") pod "e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" (UID: "e618f7f4-c1f6-40cd-aa78-e0f711acd1b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.410010 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-kube-api-access-nb98g" (OuterVolumeSpecName: "kube-api-access-nb98g") pod "01938f77-146f-4d3d-a8f6-d1d4673ad3d4" (UID: "01938f77-146f-4d3d-a8f6-d1d4673ad3d4"). InnerVolumeSpecName "kube-api-access-nb98g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.410054 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-kube-api-access-ljn2m" (OuterVolumeSpecName: "kube-api-access-ljn2m") pod "5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" (UID: "5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199"). InnerVolumeSpecName "kube-api-access-ljn2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.410116 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-utilities" (OuterVolumeSpecName: "utilities") pod "4e65eda3-0eae-4672-9f18-c87148fcc449" (UID: "4e65eda3-0eae-4672-9f18-c87148fcc449"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.411502 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-kube-api-access-tbg9v" (OuterVolumeSpecName: "kube-api-access-tbg9v") pod "e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" (UID: "e618f7f4-c1f6-40cd-aa78-e0f711acd1b7"). InnerVolumeSpecName "kube-api-access-tbg9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.413280 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e65eda3-0eae-4672-9f18-c87148fcc449-kube-api-access-v8r7w" (OuterVolumeSpecName: "kube-api-access-v8r7w") pod "4e65eda3-0eae-4672-9f18-c87148fcc449" (UID: "4e65eda3-0eae-4672-9f18-c87148fcc449"). InnerVolumeSpecName "kube-api-access-v8r7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.417087 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" (UID: "c62cfc6b-827b-499f-a5c9-e8a1e89df8f4"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.430607 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-utilities" (OuterVolumeSpecName: "utilities") pod "5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" (UID: "5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.431837 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-kube-api-access-6grxx" (OuterVolumeSpecName: "kube-api-access-6grxx") pod "c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" (UID: "c62cfc6b-827b-499f-a5c9-e8a1e89df8f4"). InnerVolumeSpecName "kube-api-access-6grxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.444436 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" (UID: "5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.472388 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" (UID: "e618f7f4-c1f6-40cd-aa78-e0f711acd1b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.481143 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e65eda3-0eae-4672-9f18-c87148fcc449" (UID: "4e65eda3-0eae-4672-9f18-c87148fcc449"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506014 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nb98g\" (UniqueName: \"kubernetes.io/projected/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-kube-api-access-nb98g\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506041 4851 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506050 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbg9v\" (UniqueName: \"kubernetes.io/projected/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-kube-api-access-tbg9v\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506059 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8r7w\" (UniqueName: \"kubernetes.io/projected/4e65eda3-0eae-4672-9f18-c87148fcc449-kube-api-access-v8r7w\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506069 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506080 4851 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-6grxx\" (UniqueName: \"kubernetes.io/projected/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-kube-api-access-6grxx\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506088 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506096 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506104 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506112 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506121 4851 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506130 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljn2m\" (UniqueName: \"kubernetes.io/projected/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199-kube-api-access-ljn2m\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506139 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.506147 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e65eda3-0eae-4672-9f18-c87148fcc449-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.545727 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01938f77-146f-4d3d-a8f6-d1d4673ad3d4" (UID: "01938f77-146f-4d3d-a8f6-d1d4673ad3d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.606935 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01938f77-146f-4d3d-a8f6-d1d4673ad3d4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:15:36 crc kubenswrapper[4851]: I0223 13:15:36.698462 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jxzb2"] Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.056047 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vbb9" event={"ID":"5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199","Type":"ContainerDied","Data":"a14e1184f0390aa93fe6ecfa33ce9e1a2316d5c3ccaa2ad0f114b9fb114e590f"} Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.056383 4851 scope.go:117] "RemoveContainer" containerID="a341665c7d6ce87497efa544fcf0e5fe2561647d374d6a19518daa7adfecc60e" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.056089 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vbb9" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.060710 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcn2z" event={"ID":"01938f77-146f-4d3d-a8f6-d1d4673ad3d4","Type":"ContainerDied","Data":"868094ad0487832c2835c9baddb7f15e20cfe127ebbc462fb73c483457517986"} Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.060812 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mcn2z" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.065224 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dpz2p" event={"ID":"e618f7f4-c1f6-40cd-aa78-e0f711acd1b7","Type":"ContainerDied","Data":"393e919b6d1e58c232cbf9098dad2ae7ab4097ef769f619a04fa5295f02b1799"} Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.065360 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dpz2p" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.066704 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" event={"ID":"c62cfc6b-827b-499f-a5c9-e8a1e89df8f4","Type":"ContainerDied","Data":"8df3ec11ef14c94d6c9207b3c8e5c5b2fd00318f279566716642002450739175"} Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.066799 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wwn4t" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.068869 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvh57" event={"ID":"4e65eda3-0eae-4672-9f18-c87148fcc449","Type":"ContainerDied","Data":"7e5cf67eb04d4d25d11ceade7557e97821ecc3982ba7391f3df97aa07b6ddf93"} Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.069001 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvh57" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.071595 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" event={"ID":"0d8139b6-0c9b-48cf-b664-44304568f2d1","Type":"ContainerStarted","Data":"56c4adce6eb3fbd22f6c1d19c280247c668afef7b965333cc565afa1b3511e94"} Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.071631 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" event={"ID":"0d8139b6-0c9b-48cf-b664-44304568f2d1","Type":"ContainerStarted","Data":"981d97c8739abab24268bebbb0c098b2fc381e250fac6b0ee012b6d55a755e47"} Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.072414 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.073440 4851 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jxzb2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.073471 4851 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" podUID="0d8139b6-0c9b-48cf-b664-44304568f2d1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.076807 4851 scope.go:117] "RemoveContainer" containerID="11a055a6ede8eff70c8c0984f4b14ed27e6b4d5b7369ba6e16e9f0fd07e0d546" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.106545 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mcn2z"] Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.109980 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mcn2z"] Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.116025 4851 scope.go:117] "RemoveContainer" containerID="b51fa7f2acd4562d3e294fb895884a9372bdb998d72546cf2cdc3f71226d5c37" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.125995 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" podStartSLOduration=2.125972501 podStartE2EDuration="2.125972501s" podCreationTimestamp="2026-02-23 13:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:15:37.118533038 +0000 UTC m=+491.800236726" watchObservedRunningTime="2026-02-23 13:15:37.125972501 +0000 UTC m=+491.807676199" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.142007 4851 scope.go:117] "RemoveContainer" containerID="740c1621bb4d17314c3ec5c38ff740e9d6dbeeab38efb232e64648e040f56950" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.147469 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vbb9"] Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.150084 4851 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vbb9"] Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.158389 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vvh57"] Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.164238 4851 scope.go:117] "RemoveContainer" containerID="b5bb26c3a3c7cfff20d49516d71047d484299223ffc441a98081cc3ef3909758" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.165467 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vvh57"] Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.177721 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wwn4t"] Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.185109 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wwn4t"] Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.190229 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dpz2p"] Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.190674 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dpz2p"] Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.192693 4851 scope.go:117] "RemoveContainer" containerID="2f150688bc4bea6800eb3467ecd087f922a4c1fbb06d2b920e7e7fda00bccc59" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.206660 4851 scope.go:117] "RemoveContainer" containerID="b570034b7a043905cb826d937b5f0211d8f3661095b665ea6f58066190b82334" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.225206 4851 scope.go:117] "RemoveContainer" containerID="1ed6484c3a6e12210ddbca5e5f268a54d492b55d6d563c0d8af968e473d583e1" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.238076 4851 scope.go:117] "RemoveContainer" 
containerID="efb59b3ca00f442903861acb737b1d4fcfdfdcfbf5a145fd29cb250125ec7d71" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.248558 4851 scope.go:117] "RemoveContainer" containerID="1783f34cee5d7393a8b03b4c47271b2a2088cdc1eb8c8a6324b9b414f46be70a" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.259384 4851 scope.go:117] "RemoveContainer" containerID="0e412f72ce1da68caf5aedd67fd7c81eed4d2e40188ae60781d07ad5a28f6a25" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.271046 4851 scope.go:117] "RemoveContainer" containerID="669a3edb4607f8c2898150cffcc71a6644ec8fea7158eaa8083b939cc92e60ad" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.287791 4851 scope.go:117] "RemoveContainer" containerID="30a6b2219ae6d912a3bb124761b503d19c3ce095d8d1f74931550a1f0869e901" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.975114 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" path="/var/lib/kubelet/pods/01938f77-146f-4d3d-a8f6-d1d4673ad3d4/volumes" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.976186 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e65eda3-0eae-4672-9f18-c87148fcc449" path="/var/lib/kubelet/pods/4e65eda3-0eae-4672-9f18-c87148fcc449/volumes" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.976907 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" path="/var/lib/kubelet/pods/5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199/volumes" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.978158 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" path="/var/lib/kubelet/pods/c62cfc6b-827b-499f-a5c9-e8a1e89df8f4/volumes" Feb 23 13:15:37 crc kubenswrapper[4851]: I0223 13:15:37.978777 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" 
path="/var/lib/kubelet/pods/e618f7f4-c1f6-40cd-aa78-e0f711acd1b7/volumes" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.025750 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f2wl8"] Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.026888 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" containerName="extract-content" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.026911 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" containerName="extract-content" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.026934 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.026944 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.026955 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" containerName="extract-utilities" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.026963 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" containerName="extract-utilities" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.026976 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" containerName="extract-utilities" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.026983 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" containerName="extract-utilities" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.026993 4851 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" containerName="marketplace-operator" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027000 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" containerName="marketplace-operator" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.027014 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" containerName="marketplace-operator" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027022 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" containerName="marketplace-operator" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.027031 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e65eda3-0eae-4672-9f18-c87148fcc449" containerName="extract-content" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027039 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e65eda3-0eae-4672-9f18-c87148fcc449" containerName="extract-content" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.027049 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027058 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.027071 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027079 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.027094 4851 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" containerName="extract-utilities" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027106 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" containerName="extract-utilities" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.027120 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e65eda3-0eae-4672-9f18-c87148fcc449" containerName="extract-utilities" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027128 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e65eda3-0eae-4672-9f18-c87148fcc449" containerName="extract-utilities" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.027137 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" containerName="extract-content" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027144 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" containerName="extract-content" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.027153 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" containerName="extract-content" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027161 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" containerName="extract-content" Feb 23 13:15:38 crc kubenswrapper[4851]: E0223 13:15:38.027172 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e65eda3-0eae-4672-9f18-c87148fcc449" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027180 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e65eda3-0eae-4672-9f18-c87148fcc449" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027291 4851 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="01938f77-146f-4d3d-a8f6-d1d4673ad3d4" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027305 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" containerName="marketplace-operator" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027316 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="e618f7f4-c1f6-40cd-aa78-e0f711acd1b7" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027347 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="c62cfc6b-827b-499f-a5c9-e8a1e89df8f4" containerName="marketplace-operator" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027359 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e65eda3-0eae-4672-9f18-c87148fcc449" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.027370 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e27ecc1-b0dc-4abb-aca6-58d1f5ff5199" containerName="registry-server" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.028307 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.030122 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.033511 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2wl8"] Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.093266 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jxzb2" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.224137 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mpp9t"] Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.225300 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.226465 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73698ae1-5cf2-41c6-99f8-0e943404b97f-catalog-content\") pod \"redhat-marketplace-f2wl8\" (UID: \"73698ae1-5cf2-41c6-99f8-0e943404b97f\") " pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.226560 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73698ae1-5cf2-41c6-99f8-0e943404b97f-utilities\") pod \"redhat-marketplace-f2wl8\" (UID: \"73698ae1-5cf2-41c6-99f8-0e943404b97f\") " pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.226654 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zsngj\" (UniqueName: \"kubernetes.io/projected/73698ae1-5cf2-41c6-99f8-0e943404b97f-kube-api-access-zsngj\") pod \"redhat-marketplace-f2wl8\" (UID: \"73698ae1-5cf2-41c6-99f8-0e943404b97f\") " pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.227260 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.233135 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpp9t"] Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.328307 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9xnd\" (UniqueName: \"kubernetes.io/projected/5445da7a-b2cb-477c-99aa-e70e2f61dd70-kube-api-access-r9xnd\") pod \"certified-operators-mpp9t\" (UID: \"5445da7a-b2cb-477c-99aa-e70e2f61dd70\") " pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.328372 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5445da7a-b2cb-477c-99aa-e70e2f61dd70-catalog-content\") pod \"certified-operators-mpp9t\" (UID: \"5445da7a-b2cb-477c-99aa-e70e2f61dd70\") " pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.328506 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73698ae1-5cf2-41c6-99f8-0e943404b97f-catalog-content\") pod \"redhat-marketplace-f2wl8\" (UID: \"73698ae1-5cf2-41c6-99f8-0e943404b97f\") " pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.328540 4851 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73698ae1-5cf2-41c6-99f8-0e943404b97f-utilities\") pod \"redhat-marketplace-f2wl8\" (UID: \"73698ae1-5cf2-41c6-99f8-0e943404b97f\") " pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.328578 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5445da7a-b2cb-477c-99aa-e70e2f61dd70-utilities\") pod \"certified-operators-mpp9t\" (UID: \"5445da7a-b2cb-477c-99aa-e70e2f61dd70\") " pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.328601 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsngj\" (UniqueName: \"kubernetes.io/projected/73698ae1-5cf2-41c6-99f8-0e943404b97f-kube-api-access-zsngj\") pod \"redhat-marketplace-f2wl8\" (UID: \"73698ae1-5cf2-41c6-99f8-0e943404b97f\") " pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.328968 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73698ae1-5cf2-41c6-99f8-0e943404b97f-utilities\") pod \"redhat-marketplace-f2wl8\" (UID: \"73698ae1-5cf2-41c6-99f8-0e943404b97f\") " pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.329102 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73698ae1-5cf2-41c6-99f8-0e943404b97f-catalog-content\") pod \"redhat-marketplace-f2wl8\" (UID: \"73698ae1-5cf2-41c6-99f8-0e943404b97f\") " pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.347569 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsngj\" (UniqueName: 
\"kubernetes.io/projected/73698ae1-5cf2-41c6-99f8-0e943404b97f-kube-api-access-zsngj\") pod \"redhat-marketplace-f2wl8\" (UID: \"73698ae1-5cf2-41c6-99f8-0e943404b97f\") " pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.386061 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.430176 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5445da7a-b2cb-477c-99aa-e70e2f61dd70-utilities\") pod \"certified-operators-mpp9t\" (UID: \"5445da7a-b2cb-477c-99aa-e70e2f61dd70\") " pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.430234 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9xnd\" (UniqueName: \"kubernetes.io/projected/5445da7a-b2cb-477c-99aa-e70e2f61dd70-kube-api-access-r9xnd\") pod \"certified-operators-mpp9t\" (UID: \"5445da7a-b2cb-477c-99aa-e70e2f61dd70\") " pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.430265 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5445da7a-b2cb-477c-99aa-e70e2f61dd70-catalog-content\") pod \"certified-operators-mpp9t\" (UID: \"5445da7a-b2cb-477c-99aa-e70e2f61dd70\") " pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.430805 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5445da7a-b2cb-477c-99aa-e70e2f61dd70-catalog-content\") pod \"certified-operators-mpp9t\" (UID: \"5445da7a-b2cb-477c-99aa-e70e2f61dd70\") " pod="openshift-marketplace/certified-operators-mpp9t" Feb 
23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.430946 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5445da7a-b2cb-477c-99aa-e70e2f61dd70-utilities\") pod \"certified-operators-mpp9t\" (UID: \"5445da7a-b2cb-477c-99aa-e70e2f61dd70\") " pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.453823 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9xnd\" (UniqueName: \"kubernetes.io/projected/5445da7a-b2cb-477c-99aa-e70e2f61dd70-kube-api-access-r9xnd\") pod \"certified-operators-mpp9t\" (UID: \"5445da7a-b2cb-477c-99aa-e70e2f61dd70\") " pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.544188 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.758248 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2wl8"] Feb 23 13:15:38 crc kubenswrapper[4851]: W0223 13:15:38.766637 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73698ae1_5cf2_41c6_99f8_0e943404b97f.slice/crio-7dea803cb80cc3de26ed26ab6c0b585df4ea6eaadec49daf3e836965b89247dd WatchSource:0}: Error finding container 7dea803cb80cc3de26ed26ab6c0b585df4ea6eaadec49daf3e836965b89247dd: Status 404 returned error can't find the container with id 7dea803cb80cc3de26ed26ab6c0b585df4ea6eaadec49daf3e836965b89247dd Feb 23 13:15:38 crc kubenswrapper[4851]: I0223 13:15:38.917137 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpp9t"] Feb 23 13:15:38 crc kubenswrapper[4851]: W0223 13:15:38.967628 4851 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5445da7a_b2cb_477c_99aa_e70e2f61dd70.slice/crio-72d222230955baef1c3e1b6ec75b3e86926f471a9dd23757e2fc5abd0a9cbce3 WatchSource:0}: Error finding container 72d222230955baef1c3e1b6ec75b3e86926f471a9dd23757e2fc5abd0a9cbce3: Status 404 returned error can't find the container with id 72d222230955baef1c3e1b6ec75b3e86926f471a9dd23757e2fc5abd0a9cbce3 Feb 23 13:15:39 crc kubenswrapper[4851]: I0223 13:15:39.096413 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp9t" event={"ID":"5445da7a-b2cb-477c-99aa-e70e2f61dd70","Type":"ContainerStarted","Data":"e9e3f929608af70b2f0e7de6041f0315967d86dc4777f2d87f2f10e7f2b5b4dd"} Feb 23 13:15:39 crc kubenswrapper[4851]: I0223 13:15:39.096549 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp9t" event={"ID":"5445da7a-b2cb-477c-99aa-e70e2f61dd70","Type":"ContainerStarted","Data":"72d222230955baef1c3e1b6ec75b3e86926f471a9dd23757e2fc5abd0a9cbce3"} Feb 23 13:15:39 crc kubenswrapper[4851]: I0223 13:15:39.098227 4851 generic.go:334] "Generic (PLEG): container finished" podID="73698ae1-5cf2-41c6-99f8-0e943404b97f" containerID="135f7eeca664c18825c9f8ab5482e4784cefc4d570ba2dba5e81997cd10229a8" exitCode=0 Feb 23 13:15:39 crc kubenswrapper[4851]: I0223 13:15:39.099050 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2wl8" event={"ID":"73698ae1-5cf2-41c6-99f8-0e943404b97f","Type":"ContainerDied","Data":"135f7eeca664c18825c9f8ab5482e4784cefc4d570ba2dba5e81997cd10229a8"} Feb 23 13:15:39 crc kubenswrapper[4851]: I0223 13:15:39.099088 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2wl8" event={"ID":"73698ae1-5cf2-41c6-99f8-0e943404b97f","Type":"ContainerStarted","Data":"7dea803cb80cc3de26ed26ab6c0b585df4ea6eaadec49daf3e836965b89247dd"} Feb 23 13:15:40 crc kubenswrapper[4851]: 
I0223 13:15:40.104806 4851 generic.go:334] "Generic (PLEG): container finished" podID="5445da7a-b2cb-477c-99aa-e70e2f61dd70" containerID="e9e3f929608af70b2f0e7de6041f0315967d86dc4777f2d87f2f10e7f2b5b4dd" exitCode=0 Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.104867 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp9t" event={"ID":"5445da7a-b2cb-477c-99aa-e70e2f61dd70","Type":"ContainerDied","Data":"e9e3f929608af70b2f0e7de6041f0315967d86dc4777f2d87f2f10e7f2b5b4dd"} Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.426145 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dwcng"] Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.427921 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.430032 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.437613 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dwcng"] Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.561965 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjkzj\" (UniqueName: \"kubernetes.io/projected/81924658-5ad1-41ab-ac76-c807fc665048-kube-api-access-rjkzj\") pod \"redhat-operators-dwcng\" (UID: \"81924658-5ad1-41ab-ac76-c807fc665048\") " pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.562041 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81924658-5ad1-41ab-ac76-c807fc665048-utilities\") pod \"redhat-operators-dwcng\" (UID: 
\"81924658-5ad1-41ab-ac76-c807fc665048\") " pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.562080 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81924658-5ad1-41ab-ac76-c807fc665048-catalog-content\") pod \"redhat-operators-dwcng\" (UID: \"81924658-5ad1-41ab-ac76-c807fc665048\") " pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.622213 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rr62g"] Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.623115 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.625424 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.633402 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rr62g"] Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.663087 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81924658-5ad1-41ab-ac76-c807fc665048-utilities\") pod \"redhat-operators-dwcng\" (UID: \"81924658-5ad1-41ab-ac76-c807fc665048\") " pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.663157 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81924658-5ad1-41ab-ac76-c807fc665048-catalog-content\") pod \"redhat-operators-dwcng\" (UID: \"81924658-5ad1-41ab-ac76-c807fc665048\") " 
pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.663190 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjkzj\" (UniqueName: \"kubernetes.io/projected/81924658-5ad1-41ab-ac76-c807fc665048-kube-api-access-rjkzj\") pod \"redhat-operators-dwcng\" (UID: \"81924658-5ad1-41ab-ac76-c807fc665048\") " pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.663843 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81924658-5ad1-41ab-ac76-c807fc665048-utilities\") pod \"redhat-operators-dwcng\" (UID: \"81924658-5ad1-41ab-ac76-c807fc665048\") " pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.664628 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81924658-5ad1-41ab-ac76-c807fc665048-catalog-content\") pod \"redhat-operators-dwcng\" (UID: \"81924658-5ad1-41ab-ac76-c807fc665048\") " pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.684291 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjkzj\" (UniqueName: \"kubernetes.io/projected/81924658-5ad1-41ab-ac76-c807fc665048-kube-api-access-rjkzj\") pod \"redhat-operators-dwcng\" (UID: \"81924658-5ad1-41ab-ac76-c807fc665048\") " pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.764709 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqqjn\" (UniqueName: \"kubernetes.io/projected/899927dd-1984-4973-94f5-e53fac8948ab-kube-api-access-fqqjn\") pod \"community-operators-rr62g\" (UID: \"899927dd-1984-4973-94f5-e53fac8948ab\") " 
pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.764751 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899927dd-1984-4973-94f5-e53fac8948ab-utilities\") pod \"community-operators-rr62g\" (UID: \"899927dd-1984-4973-94f5-e53fac8948ab\") " pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.764790 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899927dd-1984-4973-94f5-e53fac8948ab-catalog-content\") pod \"community-operators-rr62g\" (UID: \"899927dd-1984-4973-94f5-e53fac8948ab\") " pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.837616 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.865680 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqqjn\" (UniqueName: \"kubernetes.io/projected/899927dd-1984-4973-94f5-e53fac8948ab-kube-api-access-fqqjn\") pod \"community-operators-rr62g\" (UID: \"899927dd-1984-4973-94f5-e53fac8948ab\") " pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.865725 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899927dd-1984-4973-94f5-e53fac8948ab-utilities\") pod \"community-operators-rr62g\" (UID: \"899927dd-1984-4973-94f5-e53fac8948ab\") " pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.865763 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899927dd-1984-4973-94f5-e53fac8948ab-catalog-content\") pod \"community-operators-rr62g\" (UID: \"899927dd-1984-4973-94f5-e53fac8948ab\") " pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.866647 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899927dd-1984-4973-94f5-e53fac8948ab-utilities\") pod \"community-operators-rr62g\" (UID: \"899927dd-1984-4973-94f5-e53fac8948ab\") " pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.866670 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899927dd-1984-4973-94f5-e53fac8948ab-catalog-content\") pod \"community-operators-rr62g\" (UID: \"899927dd-1984-4973-94f5-e53fac8948ab\") " pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.886788 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqqjn\" (UniqueName: \"kubernetes.io/projected/899927dd-1984-4973-94f5-e53fac8948ab-kube-api-access-fqqjn\") pod \"community-operators-rr62g\" (UID: \"899927dd-1984-4973-94f5-e53fac8948ab\") " pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:40 crc kubenswrapper[4851]: I0223 13:15:40.941847 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:41 crc kubenswrapper[4851]: I0223 13:15:41.111965 4851 generic.go:334] "Generic (PLEG): container finished" podID="5445da7a-b2cb-477c-99aa-e70e2f61dd70" containerID="f8f950b637eb954e215a58abc5b6ca4af285ffa2b8a0d05ed3f7424058644888" exitCode=0 Feb 23 13:15:41 crc kubenswrapper[4851]: I0223 13:15:41.112058 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp9t" event={"ID":"5445da7a-b2cb-477c-99aa-e70e2f61dd70","Type":"ContainerDied","Data":"f8f950b637eb954e215a58abc5b6ca4af285ffa2b8a0d05ed3f7424058644888"} Feb 23 13:15:41 crc kubenswrapper[4851]: I0223 13:15:41.119247 4851 generic.go:334] "Generic (PLEG): container finished" podID="73698ae1-5cf2-41c6-99f8-0e943404b97f" containerID="145d71dbce24d36981e25aa68671efdd5b8e10ff093ca8c6b3a0d8438f8b358e" exitCode=0 Feb 23 13:15:41 crc kubenswrapper[4851]: I0223 13:15:41.119283 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2wl8" event={"ID":"73698ae1-5cf2-41c6-99f8-0e943404b97f","Type":"ContainerDied","Data":"145d71dbce24d36981e25aa68671efdd5b8e10ff093ca8c6b3a0d8438f8b358e"} Feb 23 13:15:41 crc kubenswrapper[4851]: I0223 13:15:41.261228 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dwcng"] Feb 23 13:15:41 crc kubenswrapper[4851]: W0223 13:15:41.268784 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81924658_5ad1_41ab_ac76_c807fc665048.slice/crio-7c94f2dfe0079339fa6bab6508e17a4b507e5e34526a598293b608bff956862d WatchSource:0}: Error finding container 7c94f2dfe0079339fa6bab6508e17a4b507e5e34526a598293b608bff956862d: Status 404 returned error can't find the container with id 7c94f2dfe0079339fa6bab6508e17a4b507e5e34526a598293b608bff956862d Feb 23 13:15:41 crc kubenswrapper[4851]: I0223 
13:15:41.341244 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rr62g"] Feb 23 13:15:41 crc kubenswrapper[4851]: W0223 13:15:41.369567 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod899927dd_1984_4973_94f5_e53fac8948ab.slice/crio-50dc7476bf1d611fa2e4acf9fd3dc9d9eb358dc0079176d4e0b689fc9dd61449 WatchSource:0}: Error finding container 50dc7476bf1d611fa2e4acf9fd3dc9d9eb358dc0079176d4e0b689fc9dd61449: Status 404 returned error can't find the container with id 50dc7476bf1d611fa2e4acf9fd3dc9d9eb358dc0079176d4e0b689fc9dd61449 Feb 23 13:15:41 crc kubenswrapper[4851]: I0223 13:15:41.926002 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:15:41 crc kubenswrapper[4851]: I0223 13:15:41.926142 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:15:42 crc kubenswrapper[4851]: I0223 13:15:42.126516 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpp9t" event={"ID":"5445da7a-b2cb-477c-99aa-e70e2f61dd70","Type":"ContainerStarted","Data":"b19b7d35361b9c2d395dab56b1ce3e4ce22329ad159f6c434617bebfb5e34171"} Feb 23 13:15:42 crc kubenswrapper[4851]: I0223 13:15:42.128804 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2wl8" 
event={"ID":"73698ae1-5cf2-41c6-99f8-0e943404b97f","Type":"ContainerStarted","Data":"3728d8596f0e9342fa564d311c8fbbdcdc86fe61804fb3a16783259eb5c37a78"} Feb 23 13:15:42 crc kubenswrapper[4851]: I0223 13:15:42.130230 4851 generic.go:334] "Generic (PLEG): container finished" podID="899927dd-1984-4973-94f5-e53fac8948ab" containerID="d18ff6f9127358d2e17c6d6dbe68fc34139ceec3f364c41b9c0b2cfaa1f54447" exitCode=0 Feb 23 13:15:42 crc kubenswrapper[4851]: I0223 13:15:42.130302 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr62g" event={"ID":"899927dd-1984-4973-94f5-e53fac8948ab","Type":"ContainerDied","Data":"d18ff6f9127358d2e17c6d6dbe68fc34139ceec3f364c41b9c0b2cfaa1f54447"} Feb 23 13:15:42 crc kubenswrapper[4851]: I0223 13:15:42.130345 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr62g" event={"ID":"899927dd-1984-4973-94f5-e53fac8948ab","Type":"ContainerStarted","Data":"50dc7476bf1d611fa2e4acf9fd3dc9d9eb358dc0079176d4e0b689fc9dd61449"} Feb 23 13:15:42 crc kubenswrapper[4851]: I0223 13:15:42.132567 4851 generic.go:334] "Generic (PLEG): container finished" podID="81924658-5ad1-41ab-ac76-c807fc665048" containerID="6169a34ff87fbab385edd2482e5b227e41b77ce5559d67454c88e227a9024d16" exitCode=0 Feb 23 13:15:42 crc kubenswrapper[4851]: I0223 13:15:42.132621 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwcng" event={"ID":"81924658-5ad1-41ab-ac76-c807fc665048","Type":"ContainerDied","Data":"6169a34ff87fbab385edd2482e5b227e41b77ce5559d67454c88e227a9024d16"} Feb 23 13:15:42 crc kubenswrapper[4851]: I0223 13:15:42.132639 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwcng" event={"ID":"81924658-5ad1-41ab-ac76-c807fc665048","Type":"ContainerStarted","Data":"7c94f2dfe0079339fa6bab6508e17a4b507e5e34526a598293b608bff956862d"} Feb 23 13:15:42 crc kubenswrapper[4851]: I0223 
13:15:42.179386 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mpp9t" podStartSLOduration=2.777939912 podStartE2EDuration="4.179371318s" podCreationTimestamp="2026-02-23 13:15:38 +0000 UTC" firstStartedPulling="2026-02-23 13:15:40.109109069 +0000 UTC m=+494.790812747" lastFinishedPulling="2026-02-23 13:15:41.510540475 +0000 UTC m=+496.192244153" observedRunningTime="2026-02-23 13:15:42.155067054 +0000 UTC m=+496.836770742" watchObservedRunningTime="2026-02-23 13:15:42.179371318 +0000 UTC m=+496.861074996" Feb 23 13:15:42 crc kubenswrapper[4851]: I0223 13:15:42.213121 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f2wl8" podStartSLOduration=1.823750892 podStartE2EDuration="4.213107291s" podCreationTimestamp="2026-02-23 13:15:38 +0000 UTC" firstStartedPulling="2026-02-23 13:15:39.100538427 +0000 UTC m=+493.782242105" lastFinishedPulling="2026-02-23 13:15:41.489894826 +0000 UTC m=+496.171598504" observedRunningTime="2026-02-23 13:15:42.21167569 +0000 UTC m=+496.893379378" watchObservedRunningTime="2026-02-23 13:15:42.213107291 +0000 UTC m=+496.894810969" Feb 23 13:15:43 crc kubenswrapper[4851]: I0223 13:15:43.141624 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr62g" event={"ID":"899927dd-1984-4973-94f5-e53fac8948ab","Type":"ContainerStarted","Data":"919a6677219eee7052ba77e01437c1ef3e4c9ac2e916902a34b548adc0faa3ab"} Feb 23 13:15:44 crc kubenswrapper[4851]: I0223 13:15:44.149117 4851 generic.go:334] "Generic (PLEG): container finished" podID="899927dd-1984-4973-94f5-e53fac8948ab" containerID="919a6677219eee7052ba77e01437c1ef3e4c9ac2e916902a34b548adc0faa3ab" exitCode=0 Feb 23 13:15:44 crc kubenswrapper[4851]: I0223 13:15:44.149177 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr62g" 
event={"ID":"899927dd-1984-4973-94f5-e53fac8948ab","Type":"ContainerDied","Data":"919a6677219eee7052ba77e01437c1ef3e4c9ac2e916902a34b548adc0faa3ab"} Feb 23 13:15:44 crc kubenswrapper[4851]: I0223 13:15:44.152497 4851 generic.go:334] "Generic (PLEG): container finished" podID="81924658-5ad1-41ab-ac76-c807fc665048" containerID="04beb7f9a0eade08480494b28b541f71d5e94110db638d1587ad4d21bbde566a" exitCode=0 Feb 23 13:15:44 crc kubenswrapper[4851]: I0223 13:15:44.152534 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwcng" event={"ID":"81924658-5ad1-41ab-ac76-c807fc665048","Type":"ContainerDied","Data":"04beb7f9a0eade08480494b28b541f71d5e94110db638d1587ad4d21bbde566a"} Feb 23 13:15:45 crc kubenswrapper[4851]: I0223 13:15:45.160119 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rr62g" event={"ID":"899927dd-1984-4973-94f5-e53fac8948ab","Type":"ContainerStarted","Data":"020fdf2e395123f4a3fcc8b6c8054278e865c62d357ce06d42c327b692dc053a"} Feb 23 13:15:45 crc kubenswrapper[4851]: I0223 13:15:45.164600 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwcng" event={"ID":"81924658-5ad1-41ab-ac76-c807fc665048","Type":"ContainerStarted","Data":"8284d7ca41da5a82bdbd0b2d0bd3a05208279cb628c0d2f79ada8d319f8a2356"} Feb 23 13:15:45 crc kubenswrapper[4851]: I0223 13:15:45.205664 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rr62g" podStartSLOduration=2.804760291 podStartE2EDuration="5.205639548s" podCreationTimestamp="2026-02-23 13:15:40 +0000 UTC" firstStartedPulling="2026-02-23 13:15:42.131695447 +0000 UTC m=+496.813399125" lastFinishedPulling="2026-02-23 13:15:44.532574694 +0000 UTC m=+499.214278382" observedRunningTime="2026-02-23 13:15:45.185622026 +0000 UTC m=+499.867325724" watchObservedRunningTime="2026-02-23 13:15:45.205639548 +0000 UTC m=+499.887343246" 
Feb 23 13:15:45 crc kubenswrapper[4851]: I0223 13:15:45.208871 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dwcng" podStartSLOduration=2.814467728 podStartE2EDuration="5.20886087s" podCreationTimestamp="2026-02-23 13:15:40 +0000 UTC" firstStartedPulling="2026-02-23 13:15:42.133835318 +0000 UTC m=+496.815538996" lastFinishedPulling="2026-02-23 13:15:44.52822846 +0000 UTC m=+499.209932138" observedRunningTime="2026-02-23 13:15:45.20257614 +0000 UTC m=+499.884279828" watchObservedRunningTime="2026-02-23 13:15:45.20886087 +0000 UTC m=+499.890564548" Feb 23 13:15:48 crc kubenswrapper[4851]: I0223 13:15:48.386934 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:48 crc kubenswrapper[4851]: I0223 13:15:48.387515 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:48 crc kubenswrapper[4851]: I0223 13:15:48.448840 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:48 crc kubenswrapper[4851]: I0223 13:15:48.544830 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:48 crc kubenswrapper[4851]: I0223 13:15:48.544916 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:48 crc kubenswrapper[4851]: I0223 13:15:48.579262 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:49 crc kubenswrapper[4851]: I0223 13:15:49.216484 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mpp9t" Feb 23 13:15:49 crc kubenswrapper[4851]: I0223 
13:15:49.217642 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f2wl8" Feb 23 13:15:50 crc kubenswrapper[4851]: I0223 13:15:50.838636 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:50 crc kubenswrapper[4851]: I0223 13:15:50.839018 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:50 crc kubenswrapper[4851]: I0223 13:15:50.877168 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:50 crc kubenswrapper[4851]: I0223 13:15:50.942431 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:50 crc kubenswrapper[4851]: I0223 13:15:50.942495 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:50 crc kubenswrapper[4851]: I0223 13:15:50.978429 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:15:51 crc kubenswrapper[4851]: I0223 13:15:51.225449 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dwcng" Feb 23 13:15:51 crc kubenswrapper[4851]: I0223 13:15:51.230147 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rr62g" Feb 23 13:16:11 crc kubenswrapper[4851]: I0223 13:16:11.925378 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 23 13:16:11 crc kubenswrapper[4851]: I0223 13:16:11.926252 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:16:11 crc kubenswrapper[4851]: I0223 13:16:11.926299 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:16:11 crc kubenswrapper[4851]: I0223 13:16:11.926948 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a4fffcd1de0aee50b0d802adbc8f5e9c57018a083d0f79caa2e97709f627f3e"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 13:16:11 crc kubenswrapper[4851]: I0223 13:16:11.927065 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://7a4fffcd1de0aee50b0d802adbc8f5e9c57018a083d0f79caa2e97709f627f3e" gracePeriod=600 Feb 23 13:16:12 crc kubenswrapper[4851]: I0223 13:16:12.306511 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="7a4fffcd1de0aee50b0d802adbc8f5e9c57018a083d0f79caa2e97709f627f3e" exitCode=0 Feb 23 13:16:12 crc kubenswrapper[4851]: I0223 13:16:12.306553 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" 
event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"7a4fffcd1de0aee50b0d802adbc8f5e9c57018a083d0f79caa2e97709f627f3e"} Feb 23 13:16:12 crc kubenswrapper[4851]: I0223 13:16:12.306580 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"99b977156f80b246ca5cd408d8663acbebd25daf86a562bb337535d76fe02c36"} Feb 23 13:16:12 crc kubenswrapper[4851]: I0223 13:16:12.306595 4851 scope.go:117] "RemoveContainer" containerID="a39796e37d7c46747743408ee115fc38d96faf5b9f64d05a5b6e261756d05626" Feb 23 13:18:41 crc kubenswrapper[4851]: I0223 13:18:41.925322 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:18:41 crc kubenswrapper[4851]: I0223 13:18:41.925825 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:19:11 crc kubenswrapper[4851]: I0223 13:19:11.925274 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:19:11 crc kubenswrapper[4851]: I0223 13:19:11.925675 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:19:41 crc kubenswrapper[4851]: I0223 13:19:41.925070 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:19:41 crc kubenswrapper[4851]: I0223 13:19:41.925532 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:19:41 crc kubenswrapper[4851]: I0223 13:19:41.925577 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:19:41 crc kubenswrapper[4851]: I0223 13:19:41.926147 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99b977156f80b246ca5cd408d8663acbebd25daf86a562bb337535d76fe02c36"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 13:19:41 crc kubenswrapper[4851]: I0223 13:19:41.926198 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://99b977156f80b246ca5cd408d8663acbebd25daf86a562bb337535d76fe02c36" gracePeriod=600 Feb 23 13:19:42 crc kubenswrapper[4851]: I0223 
13:19:42.719846 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="99b977156f80b246ca5cd408d8663acbebd25daf86a562bb337535d76fe02c36" exitCode=0 Feb 23 13:19:42 crc kubenswrapper[4851]: I0223 13:19:42.720001 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"99b977156f80b246ca5cd408d8663acbebd25daf86a562bb337535d76fe02c36"} Feb 23 13:19:42 crc kubenswrapper[4851]: I0223 13:19:42.720692 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"1b56c77e1de63323e9342dc73ec952ed4e450a54675ac5d33629ae895364039c"} Feb 23 13:19:42 crc kubenswrapper[4851]: I0223 13:19:42.720716 4851 scope.go:117] "RemoveContainer" containerID="7a4fffcd1de0aee50b0d802adbc8f5e9c57018a083d0f79caa2e97709f627f3e" Feb 23 13:20:23 crc kubenswrapper[4851]: I0223 13:20:23.997362 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qwxvn"] Feb 23 13:20:23 crc kubenswrapper[4851]: I0223 13:20:23.998624 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qwxvn" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.000360 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.000540 4851 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-9n699" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.000943 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.008957 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-hrfw4"] Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.009587 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-hrfw4" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.011984 4851 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lnnht" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.014412 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qwxvn"] Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.026847 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k2vvq"] Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.027855 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-k2vvq" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.030256 4851 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jdsqd" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.031975 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-hrfw4"] Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.045707 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k2vvq"] Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.110944 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8lk\" (UniqueName: \"kubernetes.io/projected/4e7d2a84-a59b-4489-98c0-78b2b3dc607c-kube-api-access-zc8lk\") pod \"cert-manager-858654f9db-hrfw4\" (UID: \"4e7d2a84-a59b-4489-98c0-78b2b3dc607c\") " pod="cert-manager/cert-manager-858654f9db-hrfw4" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.111025 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb6rc\" (UniqueName: \"kubernetes.io/projected/af206ef2-9ee3-4eeb-81c6-5a82bef57eb0-kube-api-access-qb6rc\") pod \"cert-manager-cainjector-cf98fcc89-qwxvn\" (UID: \"af206ef2-9ee3-4eeb-81c6-5a82bef57eb0\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qwxvn" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.111051 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skfq5\" (UniqueName: \"kubernetes.io/projected/b8e00a19-b1f6-4672-84e7-cc8abd468123-kube-api-access-skfq5\") pod \"cert-manager-webhook-687f57d79b-k2vvq\" (UID: \"b8e00a19-b1f6-4672-84e7-cc8abd468123\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k2vvq" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.211858 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skfq5\" (UniqueName: \"kubernetes.io/projected/b8e00a19-b1f6-4672-84e7-cc8abd468123-kube-api-access-skfq5\") pod \"cert-manager-webhook-687f57d79b-k2vvq\" (UID: \"b8e00a19-b1f6-4672-84e7-cc8abd468123\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k2vvq" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.211929 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc8lk\" (UniqueName: \"kubernetes.io/projected/4e7d2a84-a59b-4489-98c0-78b2b3dc607c-kube-api-access-zc8lk\") pod \"cert-manager-858654f9db-hrfw4\" (UID: \"4e7d2a84-a59b-4489-98c0-78b2b3dc607c\") " pod="cert-manager/cert-manager-858654f9db-hrfw4" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.211984 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb6rc\" (UniqueName: \"kubernetes.io/projected/af206ef2-9ee3-4eeb-81c6-5a82bef57eb0-kube-api-access-qb6rc\") pod \"cert-manager-cainjector-cf98fcc89-qwxvn\" (UID: \"af206ef2-9ee3-4eeb-81c6-5a82bef57eb0\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qwxvn" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.234587 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skfq5\" (UniqueName: \"kubernetes.io/projected/b8e00a19-b1f6-4672-84e7-cc8abd468123-kube-api-access-skfq5\") pod \"cert-manager-webhook-687f57d79b-k2vvq\" (UID: \"b8e00a19-b1f6-4672-84e7-cc8abd468123\") " pod="cert-manager/cert-manager-webhook-687f57d79b-k2vvq" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.234600 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc8lk\" (UniqueName: \"kubernetes.io/projected/4e7d2a84-a59b-4489-98c0-78b2b3dc607c-kube-api-access-zc8lk\") pod \"cert-manager-858654f9db-hrfw4\" (UID: \"4e7d2a84-a59b-4489-98c0-78b2b3dc607c\") " 
pod="cert-manager/cert-manager-858654f9db-hrfw4" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.234607 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb6rc\" (UniqueName: \"kubernetes.io/projected/af206ef2-9ee3-4eeb-81c6-5a82bef57eb0-kube-api-access-qb6rc\") pod \"cert-manager-cainjector-cf98fcc89-qwxvn\" (UID: \"af206ef2-9ee3-4eeb-81c6-5a82bef57eb0\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-qwxvn" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.320986 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qwxvn" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.331443 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-hrfw4" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.347137 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-k2vvq" Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.804890 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-hrfw4"] Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.812575 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-k2vvq"] Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.817139 4851 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.818662 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-qwxvn"] Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.963205 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-k2vvq" 
event={"ID":"b8e00a19-b1f6-4672-84e7-cc8abd468123","Type":"ContainerStarted","Data":"b9f2c3bada9c7e587fffe3edcc1581613ed471aeeac5dccde99e9dbeb681c9ba"} Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.964047 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-hrfw4" event={"ID":"4e7d2a84-a59b-4489-98c0-78b2b3dc607c","Type":"ContainerStarted","Data":"45c5040059be454bc6dfa79aaa4a84693bebffeea2a55624947c8c0ea7b4ca9f"} Feb 23 13:20:24 crc kubenswrapper[4851]: I0223 13:20:24.964879 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qwxvn" event={"ID":"af206ef2-9ee3-4eeb-81c6-5a82bef57eb0","Type":"ContainerStarted","Data":"50dc5f7880d5c72d88f18e6d6528496d43fc6bd0421c29df551dc034de62aade"} Feb 23 13:20:27 crc kubenswrapper[4851]: I0223 13:20:27.985853 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qwxvn" event={"ID":"af206ef2-9ee3-4eeb-81c6-5a82bef57eb0","Type":"ContainerStarted","Data":"d0ae99bc8df26617ad28bd359f140cdbf35f31f05f9acc61ce693ef59dcd130a"} Feb 23 13:20:28 crc kubenswrapper[4851]: I0223 13:20:28.002484 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-qwxvn" podStartSLOduration=2.542440177 podStartE2EDuration="5.002464352s" podCreationTimestamp="2026-02-23 13:20:23 +0000 UTC" firstStartedPulling="2026-02-23 13:20:24.817542548 +0000 UTC m=+779.499246226" lastFinishedPulling="2026-02-23 13:20:27.277566723 +0000 UTC m=+781.959270401" observedRunningTime="2026-02-23 13:20:28.001297999 +0000 UTC m=+782.683001697" watchObservedRunningTime="2026-02-23 13:20:28.002464352 +0000 UTC m=+782.684168030" Feb 23 13:20:28 crc kubenswrapper[4851]: I0223 13:20:28.991835 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-k2vvq" 
event={"ID":"b8e00a19-b1f6-4672-84e7-cc8abd468123","Type":"ContainerStarted","Data":"c91f8a34ace612efd4e38d21a2b6fb61536303a96f798646928d521baf698084"} Feb 23 13:20:28 crc kubenswrapper[4851]: I0223 13:20:28.992377 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-k2vvq" Feb 23 13:20:28 crc kubenswrapper[4851]: I0223 13:20:28.994272 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-hrfw4" event={"ID":"4e7d2a84-a59b-4489-98c0-78b2b3dc607c","Type":"ContainerStarted","Data":"4170c8cb03acad6fd8ae765f9a6add5083993ca5397008acc480a394a2530ce8"} Feb 23 13:20:29 crc kubenswrapper[4851]: I0223 13:20:29.011579 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-k2vvq" podStartSLOduration=1.212933019 podStartE2EDuration="5.011563347s" podCreationTimestamp="2026-02-23 13:20:24 +0000 UTC" firstStartedPulling="2026-02-23 13:20:24.823356483 +0000 UTC m=+779.505060161" lastFinishedPulling="2026-02-23 13:20:28.621986811 +0000 UTC m=+783.303690489" observedRunningTime="2026-02-23 13:20:29.010379584 +0000 UTC m=+783.692083282" watchObservedRunningTime="2026-02-23 13:20:29.011563347 +0000 UTC m=+783.693267025" Feb 23 13:20:29 crc kubenswrapper[4851]: I0223 13:20:29.026692 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-hrfw4" podStartSLOduration=2.217814166 podStartE2EDuration="6.026672644s" podCreationTimestamp="2026-02-23 13:20:23 +0000 UTC" firstStartedPulling="2026-02-23 13:20:24.81691061 +0000 UTC m=+779.498614288" lastFinishedPulling="2026-02-23 13:20:28.625769088 +0000 UTC m=+783.307472766" observedRunningTime="2026-02-23 13:20:29.023788043 +0000 UTC m=+783.705491741" watchObservedRunningTime="2026-02-23 13:20:29.026672644 +0000 UTC m=+783.708376332" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.168279 4851 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n9df6"] Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.168949 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovn-controller" containerID="cri-o://8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983" gracePeriod=30 Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.169028 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="nbdb" containerID="cri-o://6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd" gracePeriod=30 Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.169121 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="sbdb" containerID="cri-o://debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c" gracePeriod=30 Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.169056 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovn-acl-logging" containerID="cri-o://44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7" gracePeriod=30 Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.169257 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896" gracePeriod=30 Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.169203 4851 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="kube-rbac-proxy-node" containerID="cri-o://44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5" gracePeriod=30 Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.169252 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="northd" containerID="cri-o://17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4" gracePeriod=30 Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.206136 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" containerID="cri-o://28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2" gracePeriod=30 Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.359867 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-k2vvq" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.514824 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/3.log" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.516744 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovn-acl-logging/0.log" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.517230 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovn-controller/0.log" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.517655 4851 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.562917 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g77w4"] Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563099 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563110 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563117 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovn-acl-logging" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563123 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovn-acl-logging" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563130 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="sbdb" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563136 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="sbdb" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563147 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="kube-rbac-proxy-node" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563153 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="kube-rbac-proxy-node" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563160 4851 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="northd" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563165 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="northd" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563175 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563181 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563189 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="nbdb" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563194 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="nbdb" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563200 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563207 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563216 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovn-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563222 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovn-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563232 4851 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563238 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563245 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="kubecfg-setup" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563251 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="kubecfg-setup" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563261 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563267 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563377 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="northd" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563385 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovn-acl-logging" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563393 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563403 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563412 4851 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="nbdb" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563420 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="kube-rbac-proxy-node" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563428 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563434 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563441 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563449 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="sbdb" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563458 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovn-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: E0223 13:20:34.563544 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563551 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.563632 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerName="ovnkube-controller" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.564992 4851 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.656821 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-env-overrides\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.656876 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzppz\" (UniqueName: \"kubernetes.io/projected/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-kube-api-access-wzppz\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.656897 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-netd\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.656916 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovn-node-metrics-cert\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.656943 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-script-lib\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.656968 4851 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-log-socket\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.656986 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-systemd\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657002 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-netns\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657009 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657023 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657050 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657088 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-ovn\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657118 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-bin\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657143 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-ovn-kubernetes\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: 
\"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657163 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-var-lib-openvswitch\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657176 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-slash\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657190 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-kubelet\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657202 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-systemd-units\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657215 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-etc-openvswitch\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657231 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-node-log\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657247 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-openvswitch\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657283 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-config\") pod \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\" (UID: \"4c1929e0-6878-4572-b6d1-3a6dd8e2c291\") " Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657523 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-slash" (OuterVolumeSpecName: "host-slash") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657535 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-log-socket" (OuterVolumeSpecName: "log-socket") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657596 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657608 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657630 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657649 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657680 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-node-log" (OuterVolumeSpecName: "node-log") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657667 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657699 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657743 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657747 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657733 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657480 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-slash\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658098 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-run-ovn\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658109 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod 
"4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658130 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-run-openvswitch\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658182 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-run-netns\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.657579 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658203 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ktq4\" (UniqueName: \"kubernetes.io/projected/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-kube-api-access-5ktq4\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658216 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658285 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-cni-netd\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658376 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-ovn-node-metrics-cert\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658408 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-run-systemd\") 
pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658429 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-var-lib-openvswitch\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658449 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-ovnkube-config\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658556 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-cni-bin\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658609 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-run-ovn-kubernetes\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658646 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-systemd-units\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658678 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-kubelet\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658717 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-log-socket\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658751 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-etc-openvswitch\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658790 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-node-log\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658819 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658859 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-ovnkube-script-lib\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658890 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-env-overrides\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658966 4851 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.658984 4851 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-slash\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659001 4851 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659017 4851 reconciler_common.go:293] 
"Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659033 4851 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659047 4851 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-node-log\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659063 4851 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659092 4851 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659110 4851 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659122 4851 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659134 4851 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659145 4851 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-log-socket\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659155 4851 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659166 4851 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659178 4851 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659188 4851 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.659198 4851 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.662199 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovn-node-metrics-cert" 
(OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.662239 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-kube-api-access-wzppz" (OuterVolumeSpecName: "kube-api-access-wzppz") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "kube-api-access-wzppz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.669306 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4c1929e0-6878-4572-b6d1-3a6dd8e2c291" (UID: "4c1929e0-6878-4572-b6d1-3a6dd8e2c291"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760008 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-run-systemd\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760058 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-ovn-node-metrics-cert\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760080 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-var-lib-openvswitch\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760098 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-ovnkube-config\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760122 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-cni-bin\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 
13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760143 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-run-ovn-kubernetes\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760147 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-run-systemd\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760200 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-systemd-units\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760163 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-systemd-units\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760195 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-var-lib-openvswitch\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760269 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-kubelet\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760319 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-log-socket\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760348 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-kubelet\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760351 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-cni-bin\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760378 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-log-socket\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760386 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-etc-openvswitch\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760395 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-run-ovn-kubernetes\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760419 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-etc-openvswitch\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760438 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760455 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-node-log\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760477 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-ovnkube-script-lib\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760492 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-env-overrides\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760518 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-slash\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760537 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-run-ovn\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760539 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-node-log\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760553 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-run-openvswitch\") pod \"ovnkube-node-g77w4\" (UID: 
\"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760570 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-run-netns\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760580 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760585 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ktq4\" (UniqueName: \"kubernetes.io/projected/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-kube-api-access-5ktq4\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760618 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-cni-netd\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760733 4851 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 
13:20:34.760752 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzppz\" (UniqueName: \"kubernetes.io/projected/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-kube-api-access-wzppz\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760767 4851 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c1929e0-6878-4572-b6d1-3a6dd8e2c291-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760811 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-cni-netd\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760822 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-run-ovn\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760906 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-slash\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760924 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-run-openvswitch\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 
23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.760991 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-host-run-netns\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.761483 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-env-overrides\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.761500 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-ovnkube-script-lib\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.761614 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-ovnkube-config\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.763549 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-ovn-node-metrics-cert\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.777893 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5ktq4\" (UniqueName: \"kubernetes.io/projected/d1dafbf6-fe45-4b42-8aea-e64a3343e07a-kube-api-access-5ktq4\") pod \"ovnkube-node-g77w4\" (UID: \"d1dafbf6-fe45-4b42-8aea-e64a3343e07a\") " pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: I0223 13:20:34.889880 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:34 crc kubenswrapper[4851]: W0223 13:20:34.908004 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1dafbf6_fe45_4b42_8aea_e64a3343e07a.slice/crio-a9b594cc00fd6e33c39b9623573e6abeb689decad006849e64cb10c2dbd3c874 WatchSource:0}: Error finding container a9b594cc00fd6e33c39b9623573e6abeb689decad006849e64cb10c2dbd3c874: Status 404 returned error can't find the container with id a9b594cc00fd6e33c39b9623573e6abeb689decad006849e64cb10c2dbd3c874 Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.024158 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" event={"ID":"d1dafbf6-fe45-4b42-8aea-e64a3343e07a","Type":"ContainerStarted","Data":"6138cb976447419043026e0fab0170858a1d1adb6782a96d3fcecd1f0eba5e41"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.024215 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" event={"ID":"d1dafbf6-fe45-4b42-8aea-e64a3343e07a","Type":"ContainerStarted","Data":"a9b594cc00fd6e33c39b9623573e6abeb689decad006849e64cb10c2dbd3c874"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.026224 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7cvl_d14644c4-9d6f-4a06-bc4a-85795d4be4cd/kube-multus/2.log" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.027843 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-t7cvl_d14644c4-9d6f-4a06-bc4a-85795d4be4cd/kube-multus/1.log" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.027881 4851 generic.go:334] "Generic (PLEG): container finished" podID="d14644c4-9d6f-4a06-bc4a-85795d4be4cd" containerID="0026076e95f0c7e84d940cc73c6f26c87c1b130819fbb330b48e9a5d5b82c6a5" exitCode=2 Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.027938 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7cvl" event={"ID":"d14644c4-9d6f-4a06-bc4a-85795d4be4cd","Type":"ContainerDied","Data":"0026076e95f0c7e84d940cc73c6f26c87c1b130819fbb330b48e9a5d5b82c6a5"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.027966 4851 scope.go:117] "RemoveContainer" containerID="f23e3112452e76d2708be5f07b2c788533677d8137785411dba75d1a469195d3" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.028377 4851 scope.go:117] "RemoveContainer" containerID="0026076e95f0c7e84d940cc73c6f26c87c1b130819fbb330b48e9a5d5b82c6a5" Feb 23 13:20:35 crc kubenswrapper[4851]: E0223 13:20:35.028557 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-t7cvl_openshift-multus(d14644c4-9d6f-4a06-bc4a-85795d4be4cd)\"" pod="openshift-multus/multus-t7cvl" podUID="d14644c4-9d6f-4a06-bc4a-85795d4be4cd" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.031320 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovnkube-controller/3.log" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.033984 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovn-acl-logging/0.log" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.034546 4851 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n9df6_4c1929e0-6878-4572-b6d1-3a6dd8e2c291/ovn-controller/0.log" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035090 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2" exitCode=0 Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035112 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c" exitCode=0 Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035119 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd" exitCode=0 Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035127 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4" exitCode=0 Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035135 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896" exitCode=0 Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035141 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5" exitCode=0 Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035149 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7" exitCode=143 Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035159 4851 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035226 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035243 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035175 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035257 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035373 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035393 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035407 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035421 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035189 4851 generic.go:334] "Generic (PLEG): container finished" podID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" containerID="8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983" exitCode=143 Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035427 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035451 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035459 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035467 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035474 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035481 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035488 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035495 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035506 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035517 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035526 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035533 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035540 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035547 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035554 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035561 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 
13:20:35.035567 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035575 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035581 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035590 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035601 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035609 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035617 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035623 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035630 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035637 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035644 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035650 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035656 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035662 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035670 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n9df6" event={"ID":"4c1929e0-6878-4572-b6d1-3a6dd8e2c291","Type":"ContainerDied","Data":"becab532ab832248a48accec5d719c69386c3939692a2725a6c04baa51a1306c"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035680 4851 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035687 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035694 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035700 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035706 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035714 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035720 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035727 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035733 4851 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.035741 4851 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1"} Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.084159 4851 scope.go:117] "RemoveContainer" containerID="28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.115488 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n9df6"] Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.120560 4851 scope.go:117] "RemoveContainer" containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.125708 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n9df6"] Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.141825 4851 scope.go:117] "RemoveContainer" containerID="debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.154649 4851 scope.go:117] "RemoveContainer" containerID="6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.175812 4851 scope.go:117] "RemoveContainer" containerID="17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.189631 4851 scope.go:117] "RemoveContainer" containerID="2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.206173 4851 scope.go:117] "RemoveContainer" 
containerID="44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.218427 4851 scope.go:117] "RemoveContainer" containerID="44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.238595 4851 scope.go:117] "RemoveContainer" containerID="8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.265732 4851 scope.go:117] "RemoveContainer" containerID="3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.282310 4851 scope.go:117] "RemoveContainer" containerID="28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2" Feb 23 13:20:35 crc kubenswrapper[4851]: E0223 13:20:35.282791 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2\": container with ID starting with 28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2 not found: ID does not exist" containerID="28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.282827 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2"} err="failed to get container status \"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2\": rpc error: code = NotFound desc = could not find container \"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2\": container with ID starting with 28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.282854 4851 scope.go:117] "RemoveContainer" 
containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" Feb 23 13:20:35 crc kubenswrapper[4851]: E0223 13:20:35.283189 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92\": container with ID starting with 63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92 not found: ID does not exist" containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.283221 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92"} err="failed to get container status \"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92\": rpc error: code = NotFound desc = could not find container \"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92\": container with ID starting with 63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.283234 4851 scope.go:117] "RemoveContainer" containerID="debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c" Feb 23 13:20:35 crc kubenswrapper[4851]: E0223 13:20:35.283474 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\": container with ID starting with debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c not found: ID does not exist" containerID="debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.283488 4851 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c"} err="failed to get container status \"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\": rpc error: code = NotFound desc = could not find container \"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\": container with ID starting with debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.283500 4851 scope.go:117] "RemoveContainer" containerID="6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd" Feb 23 13:20:35 crc kubenswrapper[4851]: E0223 13:20:35.283736 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\": container with ID starting with 6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd not found: ID does not exist" containerID="6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.283776 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd"} err="failed to get container status \"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\": rpc error: code = NotFound desc = could not find container \"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\": container with ID starting with 6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.283805 4851 scope.go:117] "RemoveContainer" containerID="17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4" Feb 23 13:20:35 crc kubenswrapper[4851]: E0223 13:20:35.284041 4851 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\": container with ID starting with 17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4 not found: ID does not exist" containerID="17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.284065 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4"} err="failed to get container status \"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\": rpc error: code = NotFound desc = could not find container \"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\": container with ID starting with 17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.284080 4851 scope.go:117] "RemoveContainer" containerID="2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896" Feb 23 13:20:35 crc kubenswrapper[4851]: E0223 13:20:35.284286 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\": container with ID starting with 2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896 not found: ID does not exist" containerID="2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.284319 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896"} err="failed to get container status \"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\": rpc error: code = NotFound desc = could not find container 
\"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\": container with ID starting with 2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.284357 4851 scope.go:117] "RemoveContainer" containerID="44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5" Feb 23 13:20:35 crc kubenswrapper[4851]: E0223 13:20:35.284602 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\": container with ID starting with 44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5 not found: ID does not exist" containerID="44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.284630 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5"} err="failed to get container status \"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\": rpc error: code = NotFound desc = could not find container \"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\": container with ID starting with 44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.284645 4851 scope.go:117] "RemoveContainer" containerID="44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7" Feb 23 13:20:35 crc kubenswrapper[4851]: E0223 13:20:35.284838 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\": container with ID starting with 44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7 not found: ID does not exist" 
containerID="44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.284869 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7"} err="failed to get container status \"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\": rpc error: code = NotFound desc = could not find container \"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\": container with ID starting with 44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.284889 4851 scope.go:117] "RemoveContainer" containerID="8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983" Feb 23 13:20:35 crc kubenswrapper[4851]: E0223 13:20:35.285129 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\": container with ID starting with 8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983 not found: ID does not exist" containerID="8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.285158 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983"} err="failed to get container status \"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\": rpc error: code = NotFound desc = could not find container \"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\": container with ID starting with 8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.285175 4851 scope.go:117] 
"RemoveContainer" containerID="3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1" Feb 23 13:20:35 crc kubenswrapper[4851]: E0223 13:20:35.285411 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\": container with ID starting with 3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1 not found: ID does not exist" containerID="3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.285435 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1"} err="failed to get container status \"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\": rpc error: code = NotFound desc = could not find container \"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\": container with ID starting with 3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.285448 4851 scope.go:117] "RemoveContainer" containerID="28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.285658 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2"} err="failed to get container status \"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2\": rpc error: code = NotFound desc = could not find container \"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2\": container with ID starting with 28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.285685 4851 
scope.go:117] "RemoveContainer" containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.285898 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92"} err="failed to get container status \"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92\": rpc error: code = NotFound desc = could not find container \"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92\": container with ID starting with 63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.285918 4851 scope.go:117] "RemoveContainer" containerID="debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.286161 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c"} err="failed to get container status \"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\": rpc error: code = NotFound desc = could not find container \"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\": container with ID starting with debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.286188 4851 scope.go:117] "RemoveContainer" containerID="6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.286547 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd"} err="failed to get container status \"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\": rpc 
error: code = NotFound desc = could not find container \"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\": container with ID starting with 6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.286573 4851 scope.go:117] "RemoveContainer" containerID="17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.286951 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4"} err="failed to get container status \"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\": rpc error: code = NotFound desc = could not find container \"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\": container with ID starting with 17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.286972 4851 scope.go:117] "RemoveContainer" containerID="2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.287598 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896"} err="failed to get container status \"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\": rpc error: code = NotFound desc = could not find container \"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\": container with ID starting with 2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.287630 4851 scope.go:117] "RemoveContainer" containerID="44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5" Feb 23 13:20:35 crc 
kubenswrapper[4851]: I0223 13:20:35.287956 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5"} err="failed to get container status \"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\": rpc error: code = NotFound desc = could not find container \"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\": container with ID starting with 44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.287979 4851 scope.go:117] "RemoveContainer" containerID="44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.288197 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7"} err="failed to get container status \"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\": rpc error: code = NotFound desc = could not find container \"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\": container with ID starting with 44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.288218 4851 scope.go:117] "RemoveContainer" containerID="8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.288486 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983"} err="failed to get container status \"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\": rpc error: code = NotFound desc = could not find container \"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\": container 
with ID starting with 8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.288504 4851 scope.go:117] "RemoveContainer" containerID="3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.288721 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1"} err="failed to get container status \"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\": rpc error: code = NotFound desc = could not find container \"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\": container with ID starting with 3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.288766 4851 scope.go:117] "RemoveContainer" containerID="28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.289000 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2"} err="failed to get container status \"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2\": rpc error: code = NotFound desc = could not find container \"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2\": container with ID starting with 28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.289015 4851 scope.go:117] "RemoveContainer" containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.289193 4851 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92"} err="failed to get container status \"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92\": rpc error: code = NotFound desc = could not find container \"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92\": container with ID starting with 63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.289213 4851 scope.go:117] "RemoveContainer" containerID="debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.289420 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c"} err="failed to get container status \"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\": rpc error: code = NotFound desc = could not find container \"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\": container with ID starting with debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.289434 4851 scope.go:117] "RemoveContainer" containerID="6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.289678 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd"} err="failed to get container status \"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\": rpc error: code = NotFound desc = could not find container \"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\": container with ID starting with 6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd not found: ID does not 
exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.289704 4851 scope.go:117] "RemoveContainer" containerID="17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.289897 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4"} err="failed to get container status \"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\": rpc error: code = NotFound desc = could not find container \"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\": container with ID starting with 17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.289927 4851 scope.go:117] "RemoveContainer" containerID="2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.290166 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896"} err="failed to get container status \"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\": rpc error: code = NotFound desc = could not find container \"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\": container with ID starting with 2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.290215 4851 scope.go:117] "RemoveContainer" containerID="44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.290436 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5"} err="failed to get container status 
\"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\": rpc error: code = NotFound desc = could not find container \"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\": container with ID starting with 44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.290465 4851 scope.go:117] "RemoveContainer" containerID="44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.290674 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7"} err="failed to get container status \"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\": rpc error: code = NotFound desc = could not find container \"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\": container with ID starting with 44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.290695 4851 scope.go:117] "RemoveContainer" containerID="8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.290984 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983"} err="failed to get container status \"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\": rpc error: code = NotFound desc = could not find container \"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\": container with ID starting with 8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.291017 4851 scope.go:117] "RemoveContainer" 
containerID="3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.291290 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1"} err="failed to get container status \"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\": rpc error: code = NotFound desc = could not find container \"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\": container with ID starting with 3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.291309 4851 scope.go:117] "RemoveContainer" containerID="28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.294416 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2"} err="failed to get container status \"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2\": rpc error: code = NotFound desc = could not find container \"28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2\": container with ID starting with 28e33c1832e0e5f3d602025485408a687e960654d672a550cea85a9bbc4920f2 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.294439 4851 scope.go:117] "RemoveContainer" containerID="63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.295538 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92"} err="failed to get container status \"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92\": rpc error: code = NotFound desc = could 
not find container \"63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92\": container with ID starting with 63e1509959bd5095928fea4879795f83d38e9ee5f9f8798508481bb6415b5f92 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.295577 4851 scope.go:117] "RemoveContainer" containerID="debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.296437 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c"} err="failed to get container status \"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\": rpc error: code = NotFound desc = could not find container \"debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c\": container with ID starting with debdd879109d1b8a91e3f24e7940b3faaa5c355631fa1426a82e8e897b042f7c not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.296468 4851 scope.go:117] "RemoveContainer" containerID="6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.296972 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd"} err="failed to get container status \"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\": rpc error: code = NotFound desc = could not find container \"6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd\": container with ID starting with 6e322546ea3b144398c3c9ea0613b63ea6eecacfa50ca4e23dba980424e2eafd not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.297000 4851 scope.go:117] "RemoveContainer" containerID="17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 
13:20:35.297406 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4"} err="failed to get container status \"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\": rpc error: code = NotFound desc = could not find container \"17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4\": container with ID starting with 17a8064957481dab89cc074637f8297f9066baf41669024809d1dbd0876cebd4 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.297423 4851 scope.go:117] "RemoveContainer" containerID="2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.297648 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896"} err="failed to get container status \"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\": rpc error: code = NotFound desc = could not find container \"2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896\": container with ID starting with 2413fc26f654f84566a7c310058c714f0a62475a5e98b5347b41e44610d24896 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.297665 4851 scope.go:117] "RemoveContainer" containerID="44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.298455 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5"} err="failed to get container status \"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\": rpc error: code = NotFound desc = could not find container \"44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5\": container with ID starting with 
44269756611e3a7d2c40491ede5b7a468144483b95dcfa37dd83f0af06b47bc5 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.298481 4851 scope.go:117] "RemoveContainer" containerID="44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.299419 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7"} err="failed to get container status \"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\": rpc error: code = NotFound desc = could not find container \"44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7\": container with ID starting with 44734b874287f5a88546fc7ba5da2039dbc4bafde9f727697d431a6d56e573a7 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.299442 4851 scope.go:117] "RemoveContainer" containerID="8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.299793 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983"} err="failed to get container status \"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\": rpc error: code = NotFound desc = could not find container \"8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983\": container with ID starting with 8580fe5ba21a36a8d43e581db96ce17061736e52a9c43397022eaa2ec11b4983 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.299841 4851 scope.go:117] "RemoveContainer" containerID="3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.300370 4851 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1"} err="failed to get container status \"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\": rpc error: code = NotFound desc = could not find container \"3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1\": container with ID starting with 3faf6b900e9df99a29c52f630993571e8f9ec8511a2ee116e5f148b4fa13b5d1 not found: ID does not exist" Feb 23 13:20:35 crc kubenswrapper[4851]: I0223 13:20:35.978141 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c1929e0-6878-4572-b6d1-3a6dd8e2c291" path="/var/lib/kubelet/pods/4c1929e0-6878-4572-b6d1-3a6dd8e2c291/volumes" Feb 23 13:20:36 crc kubenswrapper[4851]: I0223 13:20:36.047793 4851 generic.go:334] "Generic (PLEG): container finished" podID="d1dafbf6-fe45-4b42-8aea-e64a3343e07a" containerID="6138cb976447419043026e0fab0170858a1d1adb6782a96d3fcecd1f0eba5e41" exitCode=0 Feb 23 13:20:36 crc kubenswrapper[4851]: I0223 13:20:36.047835 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" event={"ID":"d1dafbf6-fe45-4b42-8aea-e64a3343e07a","Type":"ContainerDied","Data":"6138cb976447419043026e0fab0170858a1d1adb6782a96d3fcecd1f0eba5e41"} Feb 23 13:20:36 crc kubenswrapper[4851]: I0223 13:20:36.047878 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" event={"ID":"d1dafbf6-fe45-4b42-8aea-e64a3343e07a","Type":"ContainerStarted","Data":"b6cfa01bcb0445023f7100153e0244a4c50d8880a7c42660087f5b834aab4603"} Feb 23 13:20:36 crc kubenswrapper[4851]: I0223 13:20:36.047889 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" event={"ID":"d1dafbf6-fe45-4b42-8aea-e64a3343e07a","Type":"ContainerStarted","Data":"4f6477cfc9aad010fca805dfae0ebf0b512d2827b4276cdaff2947b53f524043"} Feb 23 13:20:36 crc kubenswrapper[4851]: I0223 13:20:36.047899 4851 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" event={"ID":"d1dafbf6-fe45-4b42-8aea-e64a3343e07a","Type":"ContainerStarted","Data":"c812a1dfe5ea84998a082dae965ca1af0787df0c61c426d1a00eb03ce1ae84db"} Feb 23 13:20:36 crc kubenswrapper[4851]: I0223 13:20:36.047909 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" event={"ID":"d1dafbf6-fe45-4b42-8aea-e64a3343e07a","Type":"ContainerStarted","Data":"6fc071d30d7eaf520ffe75b9fa3aff930b48d66546eb76478a01d36413cfdb28"} Feb 23 13:20:36 crc kubenswrapper[4851]: I0223 13:20:36.047917 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" event={"ID":"d1dafbf6-fe45-4b42-8aea-e64a3343e07a","Type":"ContainerStarted","Data":"bd64d5da07b2ac6a5948fafaae0a6cbaf6c1c146aead9164e921f6dd3fed6b64"} Feb 23 13:20:36 crc kubenswrapper[4851]: I0223 13:20:36.047928 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" event={"ID":"d1dafbf6-fe45-4b42-8aea-e64a3343e07a","Type":"ContainerStarted","Data":"0e77fb6240b54bd77adb2c60aa6d09f21c1c018baf40e88e5659b8df3697eebc"} Feb 23 13:20:36 crc kubenswrapper[4851]: I0223 13:20:36.049881 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7cvl_d14644c4-9d6f-4a06-bc4a-85795d4be4cd/kube-multus/2.log" Feb 23 13:20:38 crc kubenswrapper[4851]: I0223 13:20:38.067826 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" event={"ID":"d1dafbf6-fe45-4b42-8aea-e64a3343e07a","Type":"ContainerStarted","Data":"77924bfec05e649281134c790171d475858b5fa6a03ad781ba4314332987b582"} Feb 23 13:20:41 crc kubenswrapper[4851]: I0223 13:20:41.096436 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" 
event={"ID":"d1dafbf6-fe45-4b42-8aea-e64a3343e07a","Type":"ContainerStarted","Data":"4522a2646a030d74d7381ea040ddfb9420c4b74aa86a5f2e2d0388ff31b3f918"} Feb 23 13:20:41 crc kubenswrapper[4851]: I0223 13:20:41.096996 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:41 crc kubenswrapper[4851]: I0223 13:20:41.097012 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:41 crc kubenswrapper[4851]: I0223 13:20:41.126513 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:41 crc kubenswrapper[4851]: I0223 13:20:41.138840 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" podStartSLOduration=7.138824047 podStartE2EDuration="7.138824047s" podCreationTimestamp="2026-02-23 13:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:20:41.136209083 +0000 UTC m=+795.817912771" watchObservedRunningTime="2026-02-23 13:20:41.138824047 +0000 UTC m=+795.820527725" Feb 23 13:20:42 crc kubenswrapper[4851]: I0223 13:20:42.103188 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:42 crc kubenswrapper[4851]: I0223 13:20:42.182716 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:20:49 crc kubenswrapper[4851]: I0223 13:20:49.969114 4851 scope.go:117] "RemoveContainer" containerID="0026076e95f0c7e84d940cc73c6f26c87c1b130819fbb330b48e9a5d5b82c6a5" Feb 23 13:20:49 crc kubenswrapper[4851]: E0223 13:20:49.970430 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-t7cvl_openshift-multus(d14644c4-9d6f-4a06-bc4a-85795d4be4cd)\"" pod="openshift-multus/multus-t7cvl" podUID="d14644c4-9d6f-4a06-bc4a-85795d4be4cd" Feb 23 13:21:00 crc kubenswrapper[4851]: I0223 13:21:00.969062 4851 scope.go:117] "RemoveContainer" containerID="0026076e95f0c7e84d940cc73c6f26c87c1b130819fbb330b48e9a5d5b82c6a5" Feb 23 13:21:01 crc kubenswrapper[4851]: I0223 13:21:01.241859 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7cvl_d14644c4-9d6f-4a06-bc4a-85795d4be4cd/kube-multus/2.log" Feb 23 13:21:01 crc kubenswrapper[4851]: I0223 13:21:01.241965 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7cvl" event={"ID":"d14644c4-9d6f-4a06-bc4a-85795d4be4cd","Type":"ContainerStarted","Data":"0c30fb458453c91f5b233f8b162e780ce01980e1ab26cd4def01097e3edae7f4"} Feb 23 13:21:04 crc kubenswrapper[4851]: I0223 13:21:04.914541 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g77w4" Feb 23 13:21:13 crc kubenswrapper[4851]: I0223 13:21:13.849991 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv"] Feb 23 13:21:13 crc kubenswrapper[4851]: I0223 13:21:13.851860 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:13 crc kubenswrapper[4851]: I0223 13:21:13.854422 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 23 13:21:13 crc kubenswrapper[4851]: I0223 13:21:13.862724 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv"] Feb 23 13:21:14 crc kubenswrapper[4851]: I0223 13:21:14.017934 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:14 crc kubenswrapper[4851]: I0223 13:21:14.018079 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:14 crc kubenswrapper[4851]: I0223 13:21:14.018198 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqchs\" (UniqueName: \"kubernetes.io/projected/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-kube-api-access-hqchs\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:14 crc kubenswrapper[4851]: 
I0223 13:21:14.119091 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:14 crc kubenswrapper[4851]: I0223 13:21:14.119142 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:14 crc kubenswrapper[4851]: I0223 13:21:14.119182 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqchs\" (UniqueName: \"kubernetes.io/projected/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-kube-api-access-hqchs\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:14 crc kubenswrapper[4851]: I0223 13:21:14.119659 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:14 crc kubenswrapper[4851]: I0223 13:21:14.119809 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:14 crc kubenswrapper[4851]: I0223 13:21:14.143775 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqchs\" (UniqueName: \"kubernetes.io/projected/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-kube-api-access-hqchs\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:14 crc kubenswrapper[4851]: I0223 13:21:14.175909 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:14 crc kubenswrapper[4851]: I0223 13:21:14.436168 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv"] Feb 23 13:21:14 crc kubenswrapper[4851]: W0223 13:21:14.443290 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79afbe6e_8ae5_4f33_b520_6f24ba3f44b2.slice/crio-4efb855def9ed609eb0973f33ea09d94b6ae7a9776e19ad939d000786037d2a0 WatchSource:0}: Error finding container 4efb855def9ed609eb0973f33ea09d94b6ae7a9776e19ad939d000786037d2a0: Status 404 returned error can't find the container with id 4efb855def9ed609eb0973f33ea09d94b6ae7a9776e19ad939d000786037d2a0 Feb 23 13:21:15 crc kubenswrapper[4851]: I0223 13:21:15.337753 4851 generic.go:334] "Generic (PLEG): container finished" podID="79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" containerID="6a5a97f7e2d202a7341cef08170e82987deccca7e21fdaffc792c9bbf8a90968" exitCode=0 
Feb 23 13:21:15 crc kubenswrapper[4851]: I0223 13:21:15.337826 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" event={"ID":"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2","Type":"ContainerDied","Data":"6a5a97f7e2d202a7341cef08170e82987deccca7e21fdaffc792c9bbf8a90968"} Feb 23 13:21:15 crc kubenswrapper[4851]: I0223 13:21:15.337919 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" event={"ID":"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2","Type":"ContainerStarted","Data":"4efb855def9ed609eb0973f33ea09d94b6ae7a9776e19ad939d000786037d2a0"} Feb 23 13:21:17 crc kubenswrapper[4851]: I0223 13:21:17.350488 4851 generic.go:334] "Generic (PLEG): container finished" podID="79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" containerID="829e69ea2d0af8485b43963300e917e24553d7efeb60a9b58dd1e70897fbbc99" exitCode=0 Feb 23 13:21:17 crc kubenswrapper[4851]: I0223 13:21:17.350908 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" event={"ID":"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2","Type":"ContainerDied","Data":"829e69ea2d0af8485b43963300e917e24553d7efeb60a9b58dd1e70897fbbc99"} Feb 23 13:21:18 crc kubenswrapper[4851]: I0223 13:21:18.358925 4851 generic.go:334] "Generic (PLEG): container finished" podID="79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" containerID="f81228282962e617f1ea353b56fbf9cd621fd405c6f7956596ac5e4fd09c66fa" exitCode=0 Feb 23 13:21:18 crc kubenswrapper[4851]: I0223 13:21:18.358966 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" event={"ID":"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2","Type":"ContainerDied","Data":"f81228282962e617f1ea353b56fbf9cd621fd405c6f7956596ac5e4fd09c66fa"} Feb 23 13:21:19 crc kubenswrapper[4851]: I0223 
13:21:19.624213 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:19 crc kubenswrapper[4851]: I0223 13:21:19.699164 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-bundle\") pod \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " Feb 23 13:21:19 crc kubenswrapper[4851]: I0223 13:21:19.699861 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-bundle" (OuterVolumeSpecName: "bundle") pod "79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" (UID: "79afbe6e-8ae5-4f33-b520-6f24ba3f44b2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:21:19 crc kubenswrapper[4851]: I0223 13:21:19.799864 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqchs\" (UniqueName: \"kubernetes.io/projected/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-kube-api-access-hqchs\") pod \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " Feb 23 13:21:19 crc kubenswrapper[4851]: I0223 13:21:19.799928 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-util\") pod \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\" (UID: \"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2\") " Feb 23 13:21:19 crc kubenswrapper[4851]: I0223 13:21:19.800228 4851 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:21:19 crc kubenswrapper[4851]: I0223 13:21:19.805688 4851 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-kube-api-access-hqchs" (OuterVolumeSpecName: "kube-api-access-hqchs") pod "79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" (UID: "79afbe6e-8ae5-4f33-b520-6f24ba3f44b2"). InnerVolumeSpecName "kube-api-access-hqchs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:21:19 crc kubenswrapper[4851]: I0223 13:21:19.817472 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-util" (OuterVolumeSpecName: "util") pod "79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" (UID: "79afbe6e-8ae5-4f33-b520-6f24ba3f44b2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:21:19 crc kubenswrapper[4851]: I0223 13:21:19.901694 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqchs\" (UniqueName: \"kubernetes.io/projected/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-kube-api-access-hqchs\") on node \"crc\" DevicePath \"\"" Feb 23 13:21:19 crc kubenswrapper[4851]: I0223 13:21:19.901732 4851 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79afbe6e-8ae5-4f33-b520-6f24ba3f44b2-util\") on node \"crc\" DevicePath \"\"" Feb 23 13:21:20 crc kubenswrapper[4851]: I0223 13:21:20.373643 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" event={"ID":"79afbe6e-8ae5-4f33-b520-6f24ba3f44b2","Type":"ContainerDied","Data":"4efb855def9ed609eb0973f33ea09d94b6ae7a9776e19ad939d000786037d2a0"} Feb 23 13:21:20 crc kubenswrapper[4851]: I0223 13:21:20.373689 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4efb855def9ed609eb0973f33ea09d94b6ae7a9776e19ad939d000786037d2a0" Feb 23 13:21:20 crc kubenswrapper[4851]: I0223 13:21:20.373708 4851 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.005435 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-4k8hs"] Feb 23 13:21:22 crc kubenswrapper[4851]: E0223 13:21:22.005956 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" containerName="util" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.005968 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" containerName="util" Feb 23 13:21:22 crc kubenswrapper[4851]: E0223 13:21:22.005987 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" containerName="pull" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.005993 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" containerName="pull" Feb 23 13:21:22 crc kubenswrapper[4851]: E0223 13:21:22.006002 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" containerName="extract" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.006008 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" containerName="extract" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.006088 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="79afbe6e-8ae5-4f33-b520-6f24ba3f44b2" containerName="extract" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.006436 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-4k8hs" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.010501 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.010557 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.010801 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8gnks" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.027930 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbb7r\" (UniqueName: \"kubernetes.io/projected/238cc1b8-1f38-43fa-92ca-bf3561e793fd-kube-api-access-xbb7r\") pod \"nmstate-operator-694c9596b7-4k8hs\" (UID: \"238cc1b8-1f38-43fa-92ca-bf3561e793fd\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-4k8hs" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.054560 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-4k8hs"] Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.129429 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbb7r\" (UniqueName: \"kubernetes.io/projected/238cc1b8-1f38-43fa-92ca-bf3561e793fd-kube-api-access-xbb7r\") pod \"nmstate-operator-694c9596b7-4k8hs\" (UID: \"238cc1b8-1f38-43fa-92ca-bf3561e793fd\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-4k8hs" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.148232 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbb7r\" (UniqueName: \"kubernetes.io/projected/238cc1b8-1f38-43fa-92ca-bf3561e793fd-kube-api-access-xbb7r\") pod \"nmstate-operator-694c9596b7-4k8hs\" (UID: 
\"238cc1b8-1f38-43fa-92ca-bf3561e793fd\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-4k8hs" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.320797 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-4k8hs" Feb 23 13:21:22 crc kubenswrapper[4851]: I0223 13:21:22.609160 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-4k8hs"] Feb 23 13:21:22 crc kubenswrapper[4851]: W0223 13:21:22.614970 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod238cc1b8_1f38_43fa_92ca_bf3561e793fd.slice/crio-ef8c415f1e23de7602fd947ed340e2dbe099afce7345ce29c6ff69dc94e49eb3 WatchSource:0}: Error finding container ef8c415f1e23de7602fd947ed340e2dbe099afce7345ce29c6ff69dc94e49eb3: Status 404 returned error can't find the container with id ef8c415f1e23de7602fd947ed340e2dbe099afce7345ce29c6ff69dc94e49eb3 Feb 23 13:21:23 crc kubenswrapper[4851]: I0223 13:21:23.394708 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-4k8hs" event={"ID":"238cc1b8-1f38-43fa-92ca-bf3561e793fd","Type":"ContainerStarted","Data":"ef8c415f1e23de7602fd947ed340e2dbe099afce7345ce29c6ff69dc94e49eb3"} Feb 23 13:21:25 crc kubenswrapper[4851]: I0223 13:21:25.788429 4851 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 13:21:29 crc kubenswrapper[4851]: I0223 13:21:29.434452 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-4k8hs" event={"ID":"238cc1b8-1f38-43fa-92ca-bf3561e793fd","Type":"ContainerStarted","Data":"5ebc8c95f3fd18ecc32fdba134fdad0228aa9ec9df8775ece5a6331a83460ea3"} Feb 23 13:21:29 crc kubenswrapper[4851]: I0223 13:21:29.451284 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-operator-694c9596b7-4k8hs" podStartSLOduration=2.271060893 podStartE2EDuration="8.451268839s" podCreationTimestamp="2026-02-23 13:21:21 +0000 UTC" firstStartedPulling="2026-02-23 13:21:22.618049766 +0000 UTC m=+837.299753444" lastFinishedPulling="2026-02-23 13:21:28.798257702 +0000 UTC m=+843.479961390" observedRunningTime="2026-02-23 13:21:29.448227983 +0000 UTC m=+844.129931661" watchObservedRunningTime="2026-02-23 13:21:29.451268839 +0000 UTC m=+844.132972517" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.415295 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx"] Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.416401 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.420872 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pzcjf" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.425925 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m"] Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.441662 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx"] Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.441690 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-gdkg9"] Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.441685 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g6bp\" (UniqueName: \"kubernetes.io/projected/c4210a5b-8df0-4ccc-9811-5a2a831c2fa1-kube-api-access-7g6bp\") pod \"nmstate-metrics-58c85c668d-jmkxx\" (UID: \"c4210a5b-8df0-4ccc-9811-5a2a831c2fa1\") " 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.442181 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.442606 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.447125 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.451113 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m"] Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.542086 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6bp\" (UniqueName: \"kubernetes.io/projected/c4210a5b-8df0-4ccc-9811-5a2a831c2fa1-kube-api-access-7g6bp\") pod \"nmstate-metrics-58c85c668d-jmkxx\" (UID: \"c4210a5b-8df0-4ccc-9811-5a2a831c2fa1\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.542145 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/730c196a-16c5-4564-a5e1-db3f9fdd31d7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mf27m\" (UID: \"730c196a-16c5-4564-a5e1-db3f9fdd31d7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.542171 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/375d9b3b-340d-4b74-b352-74ac68607ad8-ovs-socket\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " 
pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.542189 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/375d9b3b-340d-4b74-b352-74ac68607ad8-dbus-socket\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.542213 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g765h\" (UniqueName: \"kubernetes.io/projected/375d9b3b-340d-4b74-b352-74ac68607ad8-kube-api-access-g765h\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.542485 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wd2c\" (UniqueName: \"kubernetes.io/projected/730c196a-16c5-4564-a5e1-db3f9fdd31d7-kube-api-access-4wd2c\") pod \"nmstate-webhook-866bcb46dc-mf27m\" (UID: \"730c196a-16c5-4564-a5e1-db3f9fdd31d7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.546028 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/375d9b3b-340d-4b74-b352-74ac68607ad8-nmstate-lock\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.547886 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9"] Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.548565 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.551010 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.551291 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.551432 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ncwvv" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.556931 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9"] Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.580032 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g6bp\" (UniqueName: \"kubernetes.io/projected/c4210a5b-8df0-4ccc-9811-5a2a831c2fa1-kube-api-access-7g6bp\") pod \"nmstate-metrics-58c85c668d-jmkxx\" (UID: \"c4210a5b-8df0-4ccc-9811-5a2a831c2fa1\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.646990 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/375d9b3b-340d-4b74-b352-74ac68607ad8-ovs-socket\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.647132 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/375d9b3b-340d-4b74-b352-74ac68607ad8-dbus-socket\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 
13:21:30.647176 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cce85f53-7343-48af-8c40-275e87fbc140-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-qm7j9\" (UID: \"cce85f53-7343-48af-8c40-275e87fbc140\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.647215 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g765h\" (UniqueName: \"kubernetes.io/projected/375d9b3b-340d-4b74-b352-74ac68607ad8-kube-api-access-g765h\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.647252 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9pbf\" (UniqueName: \"kubernetes.io/projected/cce85f53-7343-48af-8c40-275e87fbc140-kube-api-access-p9pbf\") pod \"nmstate-console-plugin-5c78fc5d65-qm7j9\" (UID: \"cce85f53-7343-48af-8c40-275e87fbc140\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.647278 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wd2c\" (UniqueName: \"kubernetes.io/projected/730c196a-16c5-4564-a5e1-db3f9fdd31d7-kube-api-access-4wd2c\") pod \"nmstate-webhook-866bcb46dc-mf27m\" (UID: \"730c196a-16c5-4564-a5e1-db3f9fdd31d7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.647303 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/375d9b3b-340d-4b74-b352-74ac68607ad8-nmstate-lock\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " 
pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.647386 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/730c196a-16c5-4564-a5e1-db3f9fdd31d7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mf27m\" (UID: \"730c196a-16c5-4564-a5e1-db3f9fdd31d7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.647411 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cce85f53-7343-48af-8c40-275e87fbc140-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-qm7j9\" (UID: \"cce85f53-7343-48af-8c40-275e87fbc140\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.647072 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/375d9b3b-340d-4b74-b352-74ac68607ad8-ovs-socket\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.648197 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/375d9b3b-340d-4b74-b352-74ac68607ad8-nmstate-lock\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.648899 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/375d9b3b-340d-4b74-b352-74ac68607ad8-dbus-socket\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 
crc kubenswrapper[4851]: I0223 13:21:30.651892 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/730c196a-16c5-4564-a5e1-db3f9fdd31d7-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-mf27m\" (UID: \"730c196a-16c5-4564-a5e1-db3f9fdd31d7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.666959 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g765h\" (UniqueName: \"kubernetes.io/projected/375d9b3b-340d-4b74-b352-74ac68607ad8-kube-api-access-g765h\") pod \"nmstate-handler-gdkg9\" (UID: \"375d9b3b-340d-4b74-b352-74ac68607ad8\") " pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.669096 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wd2c\" (UniqueName: \"kubernetes.io/projected/730c196a-16c5-4564-a5e1-db3f9fdd31d7-kube-api-access-4wd2c\") pod \"nmstate-webhook-866bcb46dc-mf27m\" (UID: \"730c196a-16c5-4564-a5e1-db3f9fdd31d7\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.731739 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5969ccc7b6-25m7q"] Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.732406 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.739399 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.748012 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cce85f53-7343-48af-8c40-275e87fbc140-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-qm7j9\" (UID: \"cce85f53-7343-48af-8c40-275e87fbc140\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.748063 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/125a74e2-f081-4b89-bab8-d733744cd5ee-console-serving-cert\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.748096 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9pbf\" (UniqueName: \"kubernetes.io/projected/cce85f53-7343-48af-8c40-275e87fbc140-kube-api-access-p9pbf\") pod \"nmstate-console-plugin-5c78fc5d65-qm7j9\" (UID: \"cce85f53-7343-48af-8c40-275e87fbc140\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.748119 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-trusted-ca-bundle\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.748152 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-console-config\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.748176 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-oauth-serving-cert\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.748200 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/125a74e2-f081-4b89-bab8-d733744cd5ee-console-oauth-config\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.748240 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-service-ca\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: E0223 13:21:30.748240 4851 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.748268 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cce85f53-7343-48af-8c40-275e87fbc140-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-qm7j9\" (UID: \"cce85f53-7343-48af-8c40-275e87fbc140\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:30 crc kubenswrapper[4851]: E0223 13:21:30.748298 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cce85f53-7343-48af-8c40-275e87fbc140-plugin-serving-cert podName:cce85f53-7343-48af-8c40-275e87fbc140 nodeName:}" failed. No retries permitted until 2026-02-23 13:21:31.248279409 +0000 UTC m=+845.929983077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/cce85f53-7343-48af-8c40-275e87fbc140-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-qm7j9" (UID: "cce85f53-7343-48af-8c40-275e87fbc140") : secret "plugin-serving-cert" not found Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.748314 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xknxg\" (UniqueName: \"kubernetes.io/projected/125a74e2-f081-4b89-bab8-d733744cd5ee-kube-api-access-xknxg\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.749295 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cce85f53-7343-48af-8c40-275e87fbc140-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-qm7j9\" (UID: \"cce85f53-7343-48af-8c40-275e87fbc140\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.760885 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.772453 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5969ccc7b6-25m7q"] Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.777727 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.806619 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9pbf\" (UniqueName: \"kubernetes.io/projected/cce85f53-7343-48af-8c40-275e87fbc140-kube-api-access-p9pbf\") pod \"nmstate-console-plugin-5c78fc5d65-qm7j9\" (UID: \"cce85f53-7343-48af-8c40-275e87fbc140\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.849668 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xknxg\" (UniqueName: \"kubernetes.io/projected/125a74e2-f081-4b89-bab8-d733744cd5ee-kube-api-access-xknxg\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.850041 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/125a74e2-f081-4b89-bab8-d733744cd5ee-console-serving-cert\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.850068 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-trusted-ca-bundle\") pod \"console-5969ccc7b6-25m7q\" (UID: 
\"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.850101 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-oauth-serving-cert\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.850124 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-console-config\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.850147 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/125a74e2-f081-4b89-bab8-d733744cd5ee-console-oauth-config\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.850176 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-service-ca\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.851248 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-oauth-serving-cert\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " 
pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.851548 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-console-config\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.852451 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-trusted-ca-bundle\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.852512 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/125a74e2-f081-4b89-bab8-d733744cd5ee-service-ca\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.863019 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/125a74e2-f081-4b89-bab8-d733744cd5ee-console-serving-cert\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.863780 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/125a74e2-f081-4b89-bab8-d733744cd5ee-console-oauth-config\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc 
kubenswrapper[4851]: I0223 13:21:30.871145 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xknxg\" (UniqueName: \"kubernetes.io/projected/125a74e2-f081-4b89-bab8-d733744cd5ee-kube-api-access-xknxg\") pod \"console-5969ccc7b6-25m7q\" (UID: \"125a74e2-f081-4b89-bab8-d733744cd5ee\") " pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:30 crc kubenswrapper[4851]: I0223 13:21:30.979819 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx"] Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.048365 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.055264 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m"] Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.224448 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5969ccc7b6-25m7q"] Feb 23 13:21:31 crc kubenswrapper[4851]: W0223 13:21:31.229491 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod125a74e2_f081_4b89_bab8_d733744cd5ee.slice/crio-58dc71ab24b3a25f04f6e469c2ded1ef6da02ba1cefa2d18c684a1678afb16d0 WatchSource:0}: Error finding container 58dc71ab24b3a25f04f6e469c2ded1ef6da02ba1cefa2d18c684a1678afb16d0: Status 404 returned error can't find the container with id 58dc71ab24b3a25f04f6e469c2ded1ef6da02ba1cefa2d18c684a1678afb16d0 Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.255520 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cce85f53-7343-48af-8c40-275e87fbc140-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-qm7j9\" (UID: \"cce85f53-7343-48af-8c40-275e87fbc140\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.260682 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cce85f53-7343-48af-8c40-275e87fbc140-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-qm7j9\" (UID: \"cce85f53-7343-48af-8c40-275e87fbc140\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.448058 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5969ccc7b6-25m7q" event={"ID":"125a74e2-f081-4b89-bab8-d733744cd5ee","Type":"ContainerStarted","Data":"d51d04f7e7104dee83219c7a61e4a9f0f7bd1f332c1f504a448a0f03490367b8"} Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.448097 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5969ccc7b6-25m7q" event={"ID":"125a74e2-f081-4b89-bab8-d733744cd5ee","Type":"ContainerStarted","Data":"58dc71ab24b3a25f04f6e469c2ded1ef6da02ba1cefa2d18c684a1678afb16d0"} Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.450254 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" event={"ID":"730c196a-16c5-4564-a5e1-db3f9fdd31d7","Type":"ContainerStarted","Data":"a36571c564aac2558e3bddfc5d19648c3c2ada452965497c926a33c9863f4be9"} Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.452126 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx" event={"ID":"c4210a5b-8df0-4ccc-9811-5a2a831c2fa1","Type":"ContainerStarted","Data":"c03598a416293c4230dffe08fca188525a1317a36defed9c69d994b788a53116"} Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.455088 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gdkg9" 
event={"ID":"375d9b3b-340d-4b74-b352-74ac68607ad8","Type":"ContainerStarted","Data":"39cade14448307158e7dcdcc8895a9b75413e9ffea855f8e07f3178bec860407"} Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.465905 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5969ccc7b6-25m7q" podStartSLOduration=1.465885277 podStartE2EDuration="1.465885277s" podCreationTimestamp="2026-02-23 13:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:21:31.465057283 +0000 UTC m=+846.146761001" watchObservedRunningTime="2026-02-23 13:21:31.465885277 +0000 UTC m=+846.147588955" Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.466411 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" Feb 23 13:21:31 crc kubenswrapper[4851]: I0223 13:21:31.670000 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9"] Feb 23 13:21:31 crc kubenswrapper[4851]: W0223 13:21:31.674559 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcce85f53_7343_48af_8c40_275e87fbc140.slice/crio-74e3206f7f2447d1fbc57f1c903f54ae0bc3f966c7be8eb3623074d6bb02595d WatchSource:0}: Error finding container 74e3206f7f2447d1fbc57f1c903f54ae0bc3f966c7be8eb3623074d6bb02595d: Status 404 returned error can't find the container with id 74e3206f7f2447d1fbc57f1c903f54ae0bc3f966c7be8eb3623074d6bb02595d Feb 23 13:21:32 crc kubenswrapper[4851]: I0223 13:21:32.463195 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" event={"ID":"cce85f53-7343-48af-8c40-275e87fbc140","Type":"ContainerStarted","Data":"74e3206f7f2447d1fbc57f1c903f54ae0bc3f966c7be8eb3623074d6bb02595d"} Feb 23 13:21:33 
crc kubenswrapper[4851]: I0223 13:21:33.470286 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gdkg9" event={"ID":"375d9b3b-340d-4b74-b352-74ac68607ad8","Type":"ContainerStarted","Data":"98602e3aba0f0ae3ea187489608f9d96e13fe1263258a8027e541d50ade880d6"} Feb 23 13:21:33 crc kubenswrapper[4851]: I0223 13:21:33.470803 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:33 crc kubenswrapper[4851]: I0223 13:21:33.472765 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" event={"ID":"730c196a-16c5-4564-a5e1-db3f9fdd31d7","Type":"ContainerStarted","Data":"caa95c385f849d3cd6285bd8adf34ad761797510a23145e803556e2b41aec752"} Feb 23 13:21:33 crc kubenswrapper[4851]: I0223 13:21:33.473109 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" Feb 23 13:21:33 crc kubenswrapper[4851]: I0223 13:21:33.475584 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx" event={"ID":"c4210a5b-8df0-4ccc-9811-5a2a831c2fa1","Type":"ContainerStarted","Data":"4ae266716e42f48d1791c0d397fdb6344a7f80050d9c853f4c5bc73f4deee8ac"} Feb 23 13:21:33 crc kubenswrapper[4851]: I0223 13:21:33.511598 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" podStartSLOduration=1.5414824070000002 podStartE2EDuration="3.511576864s" podCreationTimestamp="2026-02-23 13:21:30 +0000 UTC" firstStartedPulling="2026-02-23 13:21:31.05966402 +0000 UTC m=+845.741367698" lastFinishedPulling="2026-02-23 13:21:33.029758477 +0000 UTC m=+847.711462155" observedRunningTime="2026-02-23 13:21:33.509906307 +0000 UTC m=+848.191609995" watchObservedRunningTime="2026-02-23 13:21:33.511576864 +0000 UTC m=+848.193280562" Feb 23 13:21:33 crc 
kubenswrapper[4851]: I0223 13:21:33.517217 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-gdkg9" podStartSLOduration=1.308194769 podStartE2EDuration="3.517180213s" podCreationTimestamp="2026-02-23 13:21:30 +0000 UTC" firstStartedPulling="2026-02-23 13:21:30.810815681 +0000 UTC m=+845.492519359" lastFinishedPulling="2026-02-23 13:21:33.019801125 +0000 UTC m=+847.701504803" observedRunningTime="2026-02-23 13:21:33.48989165 +0000 UTC m=+848.171595358" watchObservedRunningTime="2026-02-23 13:21:33.517180213 +0000 UTC m=+848.198883891" Feb 23 13:21:34 crc kubenswrapper[4851]: I0223 13:21:34.483581 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" event={"ID":"cce85f53-7343-48af-8c40-275e87fbc140","Type":"ContainerStarted","Data":"5c574e9e7ab80c7122b1eced42b41338a6962ae0445e88d6107e1d2cf3f011c4"} Feb 23 13:21:34 crc kubenswrapper[4851]: I0223 13:21:34.502485 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-qm7j9" podStartSLOduration=2.224006572 podStartE2EDuration="4.502469593s" podCreationTimestamp="2026-02-23 13:21:30 +0000 UTC" firstStartedPulling="2026-02-23 13:21:31.676673348 +0000 UTC m=+846.358377016" lastFinishedPulling="2026-02-23 13:21:33.955136359 +0000 UTC m=+848.636840037" observedRunningTime="2026-02-23 13:21:34.500900449 +0000 UTC m=+849.182604157" watchObservedRunningTime="2026-02-23 13:21:34.502469593 +0000 UTC m=+849.184173271" Feb 23 13:21:35 crc kubenswrapper[4851]: I0223 13:21:35.489862 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx" event={"ID":"c4210a5b-8df0-4ccc-9811-5a2a831c2fa1","Type":"ContainerStarted","Data":"84e33ccf8a502d63cddbe6ba3c2685b6da114de02d16d6a8420f89e5ab645761"} Feb 23 13:21:35 crc kubenswrapper[4851]: I0223 13:21:35.509638 4851 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-jmkxx" podStartSLOduration=1.505014906 podStartE2EDuration="5.509604183s" podCreationTimestamp="2026-02-23 13:21:30 +0000 UTC" firstStartedPulling="2026-02-23 13:21:30.989605255 +0000 UTC m=+845.671308933" lastFinishedPulling="2026-02-23 13:21:34.994194532 +0000 UTC m=+849.675898210" observedRunningTime="2026-02-23 13:21:35.507969786 +0000 UTC m=+850.189673474" watchObservedRunningTime="2026-02-23 13:21:35.509604183 +0000 UTC m=+850.191307891" Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.454510 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sqxj9"] Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.456071 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.467074 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sqxj9"] Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.649718 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndfpp\" (UniqueName: \"kubernetes.io/projected/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-kube-api-access-ndfpp\") pod \"redhat-operators-sqxj9\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.649790 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-utilities\") pod \"redhat-operators-sqxj9\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.650003 4851 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-catalog-content\") pod \"redhat-operators-sqxj9\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.751562 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-catalog-content\") pod \"redhat-operators-sqxj9\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.751620 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndfpp\" (UniqueName: \"kubernetes.io/projected/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-kube-api-access-ndfpp\") pod \"redhat-operators-sqxj9\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.751650 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-utilities\") pod \"redhat-operators-sqxj9\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.752078 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-catalog-content\") pod \"redhat-operators-sqxj9\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.752104 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-utilities\") pod \"redhat-operators-sqxj9\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:38 crc kubenswrapper[4851]: I0223 13:21:38.775988 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndfpp\" (UniqueName: \"kubernetes.io/projected/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-kube-api-access-ndfpp\") pod \"redhat-operators-sqxj9\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:39 crc kubenswrapper[4851]: I0223 13:21:39.074801 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:39 crc kubenswrapper[4851]: I0223 13:21:39.271588 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sqxj9"] Feb 23 13:21:39 crc kubenswrapper[4851]: W0223 13:21:39.279571 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eb2d1ca_1b21_4dad_8f70_75b2c7cf3069.slice/crio-1da99c53eb33020b62f536fbfdbcb779232952d230962a2e3d1538a79b3e7998 WatchSource:0}: Error finding container 1da99c53eb33020b62f536fbfdbcb779232952d230962a2e3d1538a79b3e7998: Status 404 returned error can't find the container with id 1da99c53eb33020b62f536fbfdbcb779232952d230962a2e3d1538a79b3e7998 Feb 23 13:21:39 crc kubenswrapper[4851]: I0223 13:21:39.511225 4851 generic.go:334] "Generic (PLEG): container finished" podID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" containerID="397926b5de4d63189de48c8e8a72a912545194fd848e711aeb424e618c36d7dd" exitCode=0 Feb 23 13:21:39 crc kubenswrapper[4851]: I0223 13:21:39.511393 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqxj9" 
event={"ID":"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069","Type":"ContainerDied","Data":"397926b5de4d63189de48c8e8a72a912545194fd848e711aeb424e618c36d7dd"} Feb 23 13:21:39 crc kubenswrapper[4851]: I0223 13:21:39.511640 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqxj9" event={"ID":"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069","Type":"ContainerStarted","Data":"1da99c53eb33020b62f536fbfdbcb779232952d230962a2e3d1538a79b3e7998"} Feb 23 13:21:40 crc kubenswrapper[4851]: I0223 13:21:40.520064 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqxj9" event={"ID":"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069","Type":"ContainerStarted","Data":"ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee"} Feb 23 13:21:40 crc kubenswrapper[4851]: I0223 13:21:40.789619 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-gdkg9" Feb 23 13:21:41 crc kubenswrapper[4851]: I0223 13:21:41.049586 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:41 crc kubenswrapper[4851]: I0223 13:21:41.049934 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:41 crc kubenswrapper[4851]: I0223 13:21:41.053889 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:41 crc kubenswrapper[4851]: I0223 13:21:41.526776 4851 generic.go:334] "Generic (PLEG): container finished" podID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" containerID="ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee" exitCode=0 Feb 23 13:21:41 crc kubenswrapper[4851]: I0223 13:21:41.527520 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqxj9" 
event={"ID":"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069","Type":"ContainerDied","Data":"ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee"} Feb 23 13:21:41 crc kubenswrapper[4851]: I0223 13:21:41.534313 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5969ccc7b6-25m7q" Feb 23 13:21:41 crc kubenswrapper[4851]: I0223 13:21:41.598097 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x8scz"] Feb 23 13:21:42 crc kubenswrapper[4851]: I0223 13:21:42.541923 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqxj9" event={"ID":"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069","Type":"ContainerStarted","Data":"f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30"} Feb 23 13:21:42 crc kubenswrapper[4851]: I0223 13:21:42.576539 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sqxj9" podStartSLOduration=2.174855032 podStartE2EDuration="4.576502273s" podCreationTimestamp="2026-02-23 13:21:38 +0000 UTC" firstStartedPulling="2026-02-23 13:21:39.512781158 +0000 UTC m=+854.194484836" lastFinishedPulling="2026-02-23 13:21:41.914428389 +0000 UTC m=+856.596132077" observedRunningTime="2026-02-23 13:21:42.574752964 +0000 UTC m=+857.256456662" watchObservedRunningTime="2026-02-23 13:21:42.576502273 +0000 UTC m=+857.258205961" Feb 23 13:21:49 crc kubenswrapper[4851]: I0223 13:21:49.076017 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:49 crc kubenswrapper[4851]: I0223 13:21:49.076583 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:49 crc kubenswrapper[4851]: I0223 13:21:49.142174 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:49 crc kubenswrapper[4851]: I0223 13:21:49.643217 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:49 crc kubenswrapper[4851]: I0223 13:21:49.696324 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sqxj9"] Feb 23 13:21:50 crc kubenswrapper[4851]: I0223 13:21:50.783997 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-mf27m" Feb 23 13:21:51 crc kubenswrapper[4851]: I0223 13:21:51.610596 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sqxj9" podUID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" containerName="registry-server" containerID="cri-o://f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30" gracePeriod=2 Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.533277 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.618670 4851 generic.go:334] "Generic (PLEG): container finished" podID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" containerID="f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30" exitCode=0 Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.618725 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sqxj9" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.618717 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqxj9" event={"ID":"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069","Type":"ContainerDied","Data":"f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30"} Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.618872 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqxj9" event={"ID":"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069","Type":"ContainerDied","Data":"1da99c53eb33020b62f536fbfdbcb779232952d230962a2e3d1538a79b3e7998"} Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.618927 4851 scope.go:117] "RemoveContainer" containerID="f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.634808 4851 scope.go:117] "RemoveContainer" containerID="ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.652914 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-utilities\") pod \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.652972 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-catalog-content\") pod \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.653045 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndfpp\" (UniqueName: 
\"kubernetes.io/projected/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-kube-api-access-ndfpp\") pod \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\" (UID: \"4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069\") " Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.654116 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-utilities" (OuterVolumeSpecName: "utilities") pod "4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" (UID: "4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.658600 4851 scope.go:117] "RemoveContainer" containerID="397926b5de4d63189de48c8e8a72a912545194fd848e711aeb424e618c36d7dd" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.662076 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-kube-api-access-ndfpp" (OuterVolumeSpecName: "kube-api-access-ndfpp") pod "4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" (UID: "4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069"). InnerVolumeSpecName "kube-api-access-ndfpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.688070 4851 scope.go:117] "RemoveContainer" containerID="f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30" Feb 23 13:21:52 crc kubenswrapper[4851]: E0223 13:21:52.688580 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30\": container with ID starting with f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30 not found: ID does not exist" containerID="f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.688620 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30"} err="failed to get container status \"f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30\": rpc error: code = NotFound desc = could not find container \"f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30\": container with ID starting with f9c61a6ffd46b8e60c56c8f75c583485690a5b3f9820a337b02e2c110df9fa30 not found: ID does not exist" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.688646 4851 scope.go:117] "RemoveContainer" containerID="ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee" Feb 23 13:21:52 crc kubenswrapper[4851]: E0223 13:21:52.689872 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee\": container with ID starting with ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee not found: ID does not exist" containerID="ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.689924 
4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee"} err="failed to get container status \"ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee\": rpc error: code = NotFound desc = could not find container \"ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee\": container with ID starting with ffe3896ee42fbb77b20512bea1c366bdf281d3bcc37d12837627f5626426caee not found: ID does not exist" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.689956 4851 scope.go:117] "RemoveContainer" containerID="397926b5de4d63189de48c8e8a72a912545194fd848e711aeb424e618c36d7dd" Feb 23 13:21:52 crc kubenswrapper[4851]: E0223 13:21:52.690264 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"397926b5de4d63189de48c8e8a72a912545194fd848e711aeb424e618c36d7dd\": container with ID starting with 397926b5de4d63189de48c8e8a72a912545194fd848e711aeb424e618c36d7dd not found: ID does not exist" containerID="397926b5de4d63189de48c8e8a72a912545194fd848e711aeb424e618c36d7dd" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.690316 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397926b5de4d63189de48c8e8a72a912545194fd848e711aeb424e618c36d7dd"} err="failed to get container status \"397926b5de4d63189de48c8e8a72a912545194fd848e711aeb424e618c36d7dd\": rpc error: code = NotFound desc = could not find container \"397926b5de4d63189de48c8e8a72a912545194fd848e711aeb424e618c36d7dd\": container with ID starting with 397926b5de4d63189de48c8e8a72a912545194fd848e711aeb424e618c36d7dd not found: ID does not exist" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.754677 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndfpp\" (UniqueName: 
\"kubernetes.io/projected/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-kube-api-access-ndfpp\") on node \"crc\" DevicePath \"\"" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.754710 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.776791 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" (UID: "4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.855801 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.949896 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sqxj9"] Feb 23 13:21:52 crc kubenswrapper[4851]: I0223 13:21:52.958112 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sqxj9"] Feb 23 13:21:53 crc kubenswrapper[4851]: I0223 13:21:53.983600 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" path="/var/lib/kubelet/pods/4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069/volumes" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.418031 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j"] Feb 23 13:22:03 crc kubenswrapper[4851]: E0223 13:22:03.418839 4851 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" containerName="extract-utilities" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.418856 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" containerName="extract-utilities" Feb 23 13:22:03 crc kubenswrapper[4851]: E0223 13:22:03.418865 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" containerName="registry-server" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.418873 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" containerName="registry-server" Feb 23 13:22:03 crc kubenswrapper[4851]: E0223 13:22:03.418888 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" containerName="extract-content" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.418895 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" containerName="extract-content" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.419012 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb2d1ca-1b21-4dad-8f70-75b2c7cf3069" containerName="registry-server" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.419736 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.421713 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.436739 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j"] Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.507574 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.507894 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxvwl\" (UniqueName: \"kubernetes.io/projected/4f747063-8a9c-4fa9-8af3-4b832b22dd24-kube-api-access-sxvwl\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.508040 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:03 crc kubenswrapper[4851]: 
I0223 13:22:03.608546 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.608614 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxvwl\" (UniqueName: \"kubernetes.io/projected/4f747063-8a9c-4fa9-8af3-4b832b22dd24-kube-api-access-sxvwl\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.608662 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.609144 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.609190 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.625519 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxvwl\" (UniqueName: \"kubernetes.io/projected/4f747063-8a9c-4fa9-8af3-4b832b22dd24-kube-api-access-sxvwl\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.735900 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:03 crc kubenswrapper[4851]: I0223 13:22:03.953864 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j"] Feb 23 13:22:04 crc kubenswrapper[4851]: I0223 13:22:04.686240 4851 generic.go:334] "Generic (PLEG): container finished" podID="4f747063-8a9c-4fa9-8af3-4b832b22dd24" containerID="4b0b7af0680ccf1bf02269d89c1526bb546ec186a48156e0cb5c1b9ff7128c41" exitCode=0 Feb 23 13:22:04 crc kubenswrapper[4851]: I0223 13:22:04.686308 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" event={"ID":"4f747063-8a9c-4fa9-8af3-4b832b22dd24","Type":"ContainerDied","Data":"4b0b7af0680ccf1bf02269d89c1526bb546ec186a48156e0cb5c1b9ff7128c41"} Feb 23 13:22:04 crc kubenswrapper[4851]: I0223 13:22:04.686666 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" event={"ID":"4f747063-8a9c-4fa9-8af3-4b832b22dd24","Type":"ContainerStarted","Data":"877d69e48d7162db17e98627348c855638261fe4411ff80f99993b79cfd53eca"} Feb 23 13:22:06 crc kubenswrapper[4851]: I0223 13:22:06.647884 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-x8scz" podUID="a6fe30bd-a140-4309-9156-52d361049059" containerName="console" containerID="cri-o://f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b" gracePeriod=15 Feb 23 13:22:06 crc kubenswrapper[4851]: I0223 13:22:06.698089 4851 generic.go:334] "Generic (PLEG): container finished" podID="4f747063-8a9c-4fa9-8af3-4b832b22dd24" containerID="ed852211204951fe625b2da9e364630ed564ca6c6ca16368375f1e88591a51e7" exitCode=0 Feb 23 13:22:06 crc kubenswrapper[4851]: I0223 13:22:06.698130 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" event={"ID":"4f747063-8a9c-4fa9-8af3-4b832b22dd24","Type":"ContainerDied","Data":"ed852211204951fe625b2da9e364630ed564ca6c6ca16368375f1e88591a51e7"} Feb 23 13:22:06 crc kubenswrapper[4851]: I0223 13:22:06.929982 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f9x4j"] Feb 23 13:22:06 crc kubenswrapper[4851]: I0223 13:22:06.937301 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:06 crc kubenswrapper[4851]: I0223 13:22:06.947182 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9x4j"] Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.007423 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x8scz_a6fe30bd-a140-4309-9156-52d361049059/console/0.log" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.007500 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.072061 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-service-ca\") pod \"a6fe30bd-a140-4309-9156-52d361049059\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.072138 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-oauth-config\") pod \"a6fe30bd-a140-4309-9156-52d361049059\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.072188 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-serving-cert\") pod \"a6fe30bd-a140-4309-9156-52d361049059\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.072252 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8dmb\" (UniqueName: 
\"kubernetes.io/projected/a6fe30bd-a140-4309-9156-52d361049059-kube-api-access-t8dmb\") pod \"a6fe30bd-a140-4309-9156-52d361049059\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.072307 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-console-config\") pod \"a6fe30bd-a140-4309-9156-52d361049059\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.072364 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-trusted-ca-bundle\") pod \"a6fe30bd-a140-4309-9156-52d361049059\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.072476 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-oauth-serving-cert\") pod \"a6fe30bd-a140-4309-9156-52d361049059\" (UID: \"a6fe30bd-a140-4309-9156-52d361049059\") " Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.072678 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-utilities\") pod \"redhat-marketplace-f9x4j\" (UID: \"67ae49f4-e643-459d-934c-96aa760166b5\") " pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.072714 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8mbr\" (UniqueName: \"kubernetes.io/projected/67ae49f4-e643-459d-934c-96aa760166b5-kube-api-access-k8mbr\") pod \"redhat-marketplace-f9x4j\" (UID: 
\"67ae49f4-e643-459d-934c-96aa760166b5\") " pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.072787 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-catalog-content\") pod \"redhat-marketplace-f9x4j\" (UID: \"67ae49f4-e643-459d-934c-96aa760166b5\") " pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.073535 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-service-ca" (OuterVolumeSpecName: "service-ca") pod "a6fe30bd-a140-4309-9156-52d361049059" (UID: "a6fe30bd-a140-4309-9156-52d361049059"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.073697 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-console-config" (OuterVolumeSpecName: "console-config") pod "a6fe30bd-a140-4309-9156-52d361049059" (UID: "a6fe30bd-a140-4309-9156-52d361049059"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.073912 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a6fe30bd-a140-4309-9156-52d361049059" (UID: "a6fe30bd-a140-4309-9156-52d361049059"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.074156 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a6fe30bd-a140-4309-9156-52d361049059" (UID: "a6fe30bd-a140-4309-9156-52d361049059"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.079248 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6fe30bd-a140-4309-9156-52d361049059-kube-api-access-t8dmb" (OuterVolumeSpecName: "kube-api-access-t8dmb") pod "a6fe30bd-a140-4309-9156-52d361049059" (UID: "a6fe30bd-a140-4309-9156-52d361049059"). InnerVolumeSpecName "kube-api-access-t8dmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.079463 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a6fe30bd-a140-4309-9156-52d361049059" (UID: "a6fe30bd-a140-4309-9156-52d361049059"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.080661 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a6fe30bd-a140-4309-9156-52d361049059" (UID: "a6fe30bd-a140-4309-9156-52d361049059"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.173846 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-utilities\") pod \"redhat-marketplace-f9x4j\" (UID: \"67ae49f4-e643-459d-934c-96aa760166b5\") " pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.173897 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8mbr\" (UniqueName: \"kubernetes.io/projected/67ae49f4-e643-459d-934c-96aa760166b5-kube-api-access-k8mbr\") pod \"redhat-marketplace-f9x4j\" (UID: \"67ae49f4-e643-459d-934c-96aa760166b5\") " pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.173944 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-catalog-content\") pod \"redhat-marketplace-f9x4j\" (UID: \"67ae49f4-e643-459d-934c-96aa760166b5\") " pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.173982 4851 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.173993 4851 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.174003 4851 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a6fe30bd-a140-4309-9156-52d361049059-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.174012 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8dmb\" (UniqueName: \"kubernetes.io/projected/a6fe30bd-a140-4309-9156-52d361049059-kube-api-access-t8dmb\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.174020 4851 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-console-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.174028 4851 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.174036 4851 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6fe30bd-a140-4309-9156-52d361049059-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.174443 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-catalog-content\") pod \"redhat-marketplace-f9x4j\" (UID: \"67ae49f4-e643-459d-934c-96aa760166b5\") " pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.174679 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-utilities\") pod \"redhat-marketplace-f9x4j\" (UID: \"67ae49f4-e643-459d-934c-96aa760166b5\") " pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:07 crc 
kubenswrapper[4851]: I0223 13:22:07.190189 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8mbr\" (UniqueName: \"kubernetes.io/projected/67ae49f4-e643-459d-934c-96aa760166b5-kube-api-access-k8mbr\") pod \"redhat-marketplace-f9x4j\" (UID: \"67ae49f4-e643-459d-934c-96aa760166b5\") " pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.262714 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.462261 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9x4j"] Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.705653 4851 generic.go:334] "Generic (PLEG): container finished" podID="4f747063-8a9c-4fa9-8af3-4b832b22dd24" containerID="b0a8717023bf8c170ef71c2b00bce6d5f42ad81b72451b82961e94dec5251040" exitCode=0 Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.705717 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" event={"ID":"4f747063-8a9c-4fa9-8af3-4b832b22dd24","Type":"ContainerDied","Data":"b0a8717023bf8c170ef71c2b00bce6d5f42ad81b72451b82961e94dec5251040"} Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.706655 4851 generic.go:334] "Generic (PLEG): container finished" podID="67ae49f4-e643-459d-934c-96aa760166b5" containerID="d82ad8ae22297fe1edc67596234d6ca9b6d0e39aef80bbf0e3384c7ac5252ba4" exitCode=0 Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.706687 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9x4j" event={"ID":"67ae49f4-e643-459d-934c-96aa760166b5","Type":"ContainerDied","Data":"d82ad8ae22297fe1edc67596234d6ca9b6d0e39aef80bbf0e3384c7ac5252ba4"} Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.706701 
4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9x4j" event={"ID":"67ae49f4-e643-459d-934c-96aa760166b5","Type":"ContainerStarted","Data":"c71d33dc231c39f04f85fc6a4eff6d7155565c4e7b2099e2318d86689dfe498d"} Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.708639 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x8scz_a6fe30bd-a140-4309-9156-52d361049059/console/0.log" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.708669 4851 generic.go:334] "Generic (PLEG): container finished" podID="a6fe30bd-a140-4309-9156-52d361049059" containerID="f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b" exitCode=2 Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.708686 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x8scz" event={"ID":"a6fe30bd-a140-4309-9156-52d361049059","Type":"ContainerDied","Data":"f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b"} Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.708700 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x8scz" event={"ID":"a6fe30bd-a140-4309-9156-52d361049059","Type":"ContainerDied","Data":"f674dd0348bddec83931ad897ace598a56667bfa3598a7df66155b166879e1cc"} Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.708715 4851 scope.go:117] "RemoveContainer" containerID="f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.708798 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-x8scz" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.723013 4851 scope.go:117] "RemoveContainer" containerID="f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b" Feb 23 13:22:07 crc kubenswrapper[4851]: E0223 13:22:07.724092 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b\": container with ID starting with f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b not found: ID does not exist" containerID="f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.724127 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b"} err="failed to get container status \"f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b\": rpc error: code = NotFound desc = could not find container \"f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b\": container with ID starting with f85050ca2e148749cf4a3b1c6b861c83cb06e839876b66a6852608704a04674b not found: ID does not exist" Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.756515 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x8scz"] Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.760212 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-x8scz"] Feb 23 13:22:07 crc kubenswrapper[4851]: I0223 13:22:07.976701 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6fe30bd-a140-4309-9156-52d361049059" path="/var/lib/kubelet/pods/a6fe30bd-a140-4309-9156-52d361049059/volumes" Feb 23 13:22:08 crc kubenswrapper[4851]: I0223 13:22:08.716003 4851 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-f9x4j" event={"ID":"67ae49f4-e643-459d-934c-96aa760166b5","Type":"ContainerStarted","Data":"9ef093c3e937f80fa833e9e1a14a2b5adab99d98135b8d5e2e3b8b3307ef7000"} Feb 23 13:22:08 crc kubenswrapper[4851]: I0223 13:22:08.926533 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:08 crc kubenswrapper[4851]: I0223 13:22:08.997958 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-bundle\") pod \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " Feb 23 13:22:08 crc kubenswrapper[4851]: I0223 13:22:08.998637 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxvwl\" (UniqueName: \"kubernetes.io/projected/4f747063-8a9c-4fa9-8af3-4b832b22dd24-kube-api-access-sxvwl\") pod \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " Feb 23 13:22:08 crc kubenswrapper[4851]: I0223 13:22:08.998740 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-util\") pod \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\" (UID: \"4f747063-8a9c-4fa9-8af3-4b832b22dd24\") " Feb 23 13:22:08 crc kubenswrapper[4851]: I0223 13:22:08.999319 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-bundle" (OuterVolumeSpecName: "bundle") pod "4f747063-8a9c-4fa9-8af3-4b832b22dd24" (UID: "4f747063-8a9c-4fa9-8af3-4b832b22dd24"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:09 crc kubenswrapper[4851]: I0223 13:22:09.010620 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f747063-8a9c-4fa9-8af3-4b832b22dd24-kube-api-access-sxvwl" (OuterVolumeSpecName: "kube-api-access-sxvwl") pod "4f747063-8a9c-4fa9-8af3-4b832b22dd24" (UID: "4f747063-8a9c-4fa9-8af3-4b832b22dd24"). InnerVolumeSpecName "kube-api-access-sxvwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:22:09 crc kubenswrapper[4851]: I0223 13:22:09.011350 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-util" (OuterVolumeSpecName: "util") pod "4f747063-8a9c-4fa9-8af3-4b832b22dd24" (UID: "4f747063-8a9c-4fa9-8af3-4b832b22dd24"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:09 crc kubenswrapper[4851]: I0223 13:22:09.100749 4851 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:09 crc kubenswrapper[4851]: I0223 13:22:09.100797 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxvwl\" (UniqueName: \"kubernetes.io/projected/4f747063-8a9c-4fa9-8af3-4b832b22dd24-kube-api-access-sxvwl\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:09 crc kubenswrapper[4851]: I0223 13:22:09.100810 4851 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4f747063-8a9c-4fa9-8af3-4b832b22dd24-util\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:09 crc kubenswrapper[4851]: I0223 13:22:09.726950 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" 
event={"ID":"4f747063-8a9c-4fa9-8af3-4b832b22dd24","Type":"ContainerDied","Data":"877d69e48d7162db17e98627348c855638261fe4411ff80f99993b79cfd53eca"} Feb 23 13:22:09 crc kubenswrapper[4851]: I0223 13:22:09.726997 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j" Feb 23 13:22:09 crc kubenswrapper[4851]: I0223 13:22:09.727012 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="877d69e48d7162db17e98627348c855638261fe4411ff80f99993b79cfd53eca" Feb 23 13:22:09 crc kubenswrapper[4851]: I0223 13:22:09.729033 4851 generic.go:334] "Generic (PLEG): container finished" podID="67ae49f4-e643-459d-934c-96aa760166b5" containerID="9ef093c3e937f80fa833e9e1a14a2b5adab99d98135b8d5e2e3b8b3307ef7000" exitCode=0 Feb 23 13:22:09 crc kubenswrapper[4851]: I0223 13:22:09.729081 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9x4j" event={"ID":"67ae49f4-e643-459d-934c-96aa760166b5","Type":"ContainerDied","Data":"9ef093c3e937f80fa833e9e1a14a2b5adab99d98135b8d5e2e3b8b3307ef7000"} Feb 23 13:22:10 crc kubenswrapper[4851]: I0223 13:22:10.735859 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9x4j" event={"ID":"67ae49f4-e643-459d-934c-96aa760166b5","Type":"ContainerStarted","Data":"657905048de2e48e2ee12f5606fec2e95b94d09b444391ba2c6eb9af47964b2e"} Feb 23 13:22:11 crc kubenswrapper[4851]: I0223 13:22:11.925169 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:22:11 crc kubenswrapper[4851]: I0223 13:22:11.925239 4851 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.323622 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f9x4j" podStartSLOduration=7.883539041 podStartE2EDuration="10.323601489s" podCreationTimestamp="2026-02-23 13:22:06 +0000 UTC" firstStartedPulling="2026-02-23 13:22:07.708034459 +0000 UTC m=+882.389738137" lastFinishedPulling="2026-02-23 13:22:10.148096897 +0000 UTC m=+884.829800585" observedRunningTime="2026-02-23 13:22:10.756955504 +0000 UTC m=+885.438659212" watchObservedRunningTime="2026-02-23 13:22:16.323601489 +0000 UTC m=+891.005305177" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.326232 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mgxrn"] Feb 23 13:22:16 crc kubenswrapper[4851]: E0223 13:22:16.326555 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f747063-8a9c-4fa9-8af3-4b832b22dd24" containerName="pull" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.326602 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f747063-8a9c-4fa9-8af3-4b832b22dd24" containerName="pull" Feb 23 13:22:16 crc kubenswrapper[4851]: E0223 13:22:16.326613 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f747063-8a9c-4fa9-8af3-4b832b22dd24" containerName="extract" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.326622 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f747063-8a9c-4fa9-8af3-4b832b22dd24" containerName="extract" Feb 23 13:22:16 crc kubenswrapper[4851]: E0223 13:22:16.326638 4851 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a6fe30bd-a140-4309-9156-52d361049059" containerName="console" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.326647 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6fe30bd-a140-4309-9156-52d361049059" containerName="console" Feb 23 13:22:16 crc kubenswrapper[4851]: E0223 13:22:16.326662 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f747063-8a9c-4fa9-8af3-4b832b22dd24" containerName="util" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.326669 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f747063-8a9c-4fa9-8af3-4b832b22dd24" containerName="util" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.326801 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f747063-8a9c-4fa9-8af3-4b832b22dd24" containerName="extract" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.326814 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6fe30bd-a140-4309-9156-52d361049059" containerName="console" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.327691 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.338144 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgxrn"] Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.390224 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h82nj\" (UniqueName: \"kubernetes.io/projected/b098cb52-4356-48bb-86a5-3063470b8d4c-kube-api-access-h82nj\") pod \"certified-operators-mgxrn\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.390290 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-catalog-content\") pod \"certified-operators-mgxrn\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.390314 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-utilities\") pod \"certified-operators-mgxrn\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.491688 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h82nj\" (UniqueName: \"kubernetes.io/projected/b098cb52-4356-48bb-86a5-3063470b8d4c-kube-api-access-h82nj\") pod \"certified-operators-mgxrn\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.491760 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-catalog-content\") pod \"certified-operators-mgxrn\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.491943 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-utilities\") pod \"certified-operators-mgxrn\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.492269 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-catalog-content\") pod \"certified-operators-mgxrn\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.492413 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-utilities\") pod \"certified-operators-mgxrn\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.514041 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h82nj\" (UniqueName: \"kubernetes.io/projected/b098cb52-4356-48bb-86a5-3063470b8d4c-kube-api-access-h82nj\") pod \"certified-operators-mgxrn\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:16 crc kubenswrapper[4851]: I0223 13:22:16.645668 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.133809 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mgxrn"] Feb 23 13:22:17 crc kubenswrapper[4851]: W0223 13:22:17.142575 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb098cb52_4356_48bb_86a5_3063470b8d4c.slice/crio-7e3ec852ffc8440c1b591d715c41693c25410559384d6eeb790d89d852807642 WatchSource:0}: Error finding container 7e3ec852ffc8440c1b591d715c41693c25410559384d6eeb790d89d852807642: Status 404 returned error can't find the container with id 7e3ec852ffc8440c1b591d715c41693c25410559384d6eeb790d89d852807642 Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.263706 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.263768 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.327946 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.559746 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v"] Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.560896 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.563257 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.567558 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-7nlqx" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.567737 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.567779 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.567790 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.584353 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v"] Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.708351 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5tmt\" (UniqueName: \"kubernetes.io/projected/0926a535-2dd4-4e82-9bff-6f806330985a-kube-api-access-r5tmt\") pod \"metallb-operator-controller-manager-58d4d555d4-9b64v\" (UID: \"0926a535-2dd4-4e82-9bff-6f806330985a\") " pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.708403 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0926a535-2dd4-4e82-9bff-6f806330985a-webhook-cert\") pod 
\"metallb-operator-controller-manager-58d4d555d4-9b64v\" (UID: \"0926a535-2dd4-4e82-9bff-6f806330985a\") " pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.708448 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0926a535-2dd4-4e82-9bff-6f806330985a-apiservice-cert\") pod \"metallb-operator-controller-manager-58d4d555d4-9b64v\" (UID: \"0926a535-2dd4-4e82-9bff-6f806330985a\") " pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.771283 4851 generic.go:334] "Generic (PLEG): container finished" podID="b098cb52-4356-48bb-86a5-3063470b8d4c" containerID="00b57575a9e38fc53fd3815803c47ad089160810993e75ab459e35a907b1a9c6" exitCode=0 Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.771378 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgxrn" event={"ID":"b098cb52-4356-48bb-86a5-3063470b8d4c","Type":"ContainerDied","Data":"00b57575a9e38fc53fd3815803c47ad089160810993e75ab459e35a907b1a9c6"} Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.772050 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgxrn" event={"ID":"b098cb52-4356-48bb-86a5-3063470b8d4c","Type":"ContainerStarted","Data":"7e3ec852ffc8440c1b591d715c41693c25410559384d6eeb790d89d852807642"} Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.809344 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5tmt\" (UniqueName: \"kubernetes.io/projected/0926a535-2dd4-4e82-9bff-6f806330985a-kube-api-access-r5tmt\") pod \"metallb-operator-controller-manager-58d4d555d4-9b64v\" (UID: \"0926a535-2dd4-4e82-9bff-6f806330985a\") " pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" 
Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.809633 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0926a535-2dd4-4e82-9bff-6f806330985a-webhook-cert\") pod \"metallb-operator-controller-manager-58d4d555d4-9b64v\" (UID: \"0926a535-2dd4-4e82-9bff-6f806330985a\") " pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.809720 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0926a535-2dd4-4e82-9bff-6f806330985a-apiservice-cert\") pod \"metallb-operator-controller-manager-58d4d555d4-9b64v\" (UID: \"0926a535-2dd4-4e82-9bff-6f806330985a\") " pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.825584 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0926a535-2dd4-4e82-9bff-6f806330985a-apiservice-cert\") pod \"metallb-operator-controller-manager-58d4d555d4-9b64v\" (UID: \"0926a535-2dd4-4e82-9bff-6f806330985a\") " pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.829554 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0926a535-2dd4-4e82-9bff-6f806330985a-webhook-cert\") pod \"metallb-operator-controller-manager-58d4d555d4-9b64v\" (UID: \"0926a535-2dd4-4e82-9bff-6f806330985a\") " pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.834284 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5tmt\" (UniqueName: \"kubernetes.io/projected/0926a535-2dd4-4e82-9bff-6f806330985a-kube-api-access-r5tmt\") 
pod \"metallb-operator-controller-manager-58d4d555d4-9b64v\" (UID: \"0926a535-2dd4-4e82-9bff-6f806330985a\") " pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.835678 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.876319 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.938036 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v"] Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.938953 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.945628 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8pt9b" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.945632 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.945632 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 23 13:22:17 crc kubenswrapper[4851]: I0223 13:22:17.953846 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v"] Feb 23 13:22:18 crc kubenswrapper[4851]: I0223 13:22:18.018413 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxlb4\" (UniqueName: 
\"kubernetes.io/projected/e9ea5798-bfec-4380-b5db-eee20abfe719-kube-api-access-cxlb4\") pod \"metallb-operator-webhook-server-5d6f8cc6fd-wcv5v\" (UID: \"e9ea5798-bfec-4380-b5db-eee20abfe719\") " pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:18 crc kubenswrapper[4851]: I0223 13:22:18.018516 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9ea5798-bfec-4380-b5db-eee20abfe719-webhook-cert\") pod \"metallb-operator-webhook-server-5d6f8cc6fd-wcv5v\" (UID: \"e9ea5798-bfec-4380-b5db-eee20abfe719\") " pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:18 crc kubenswrapper[4851]: I0223 13:22:18.018571 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9ea5798-bfec-4380-b5db-eee20abfe719-apiservice-cert\") pod \"metallb-operator-webhook-server-5d6f8cc6fd-wcv5v\" (UID: \"e9ea5798-bfec-4380-b5db-eee20abfe719\") " pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:18 crc kubenswrapper[4851]: I0223 13:22:18.119357 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9ea5798-bfec-4380-b5db-eee20abfe719-webhook-cert\") pod \"metallb-operator-webhook-server-5d6f8cc6fd-wcv5v\" (UID: \"e9ea5798-bfec-4380-b5db-eee20abfe719\") " pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:18 crc kubenswrapper[4851]: I0223 13:22:18.119432 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9ea5798-bfec-4380-b5db-eee20abfe719-apiservice-cert\") pod \"metallb-operator-webhook-server-5d6f8cc6fd-wcv5v\" (UID: \"e9ea5798-bfec-4380-b5db-eee20abfe719\") " 
pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:18 crc kubenswrapper[4851]: I0223 13:22:18.119469 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxlb4\" (UniqueName: \"kubernetes.io/projected/e9ea5798-bfec-4380-b5db-eee20abfe719-kube-api-access-cxlb4\") pod \"metallb-operator-webhook-server-5d6f8cc6fd-wcv5v\" (UID: \"e9ea5798-bfec-4380-b5db-eee20abfe719\") " pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:18 crc kubenswrapper[4851]: I0223 13:22:18.126755 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9ea5798-bfec-4380-b5db-eee20abfe719-apiservice-cert\") pod \"metallb-operator-webhook-server-5d6f8cc6fd-wcv5v\" (UID: \"e9ea5798-bfec-4380-b5db-eee20abfe719\") " pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:18 crc kubenswrapper[4851]: I0223 13:22:18.132157 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9ea5798-bfec-4380-b5db-eee20abfe719-webhook-cert\") pod \"metallb-operator-webhook-server-5d6f8cc6fd-wcv5v\" (UID: \"e9ea5798-bfec-4380-b5db-eee20abfe719\") " pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:18 crc kubenswrapper[4851]: I0223 13:22:18.141602 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxlb4\" (UniqueName: \"kubernetes.io/projected/e9ea5798-bfec-4380-b5db-eee20abfe719-kube-api-access-cxlb4\") pod \"metallb-operator-webhook-server-5d6f8cc6fd-wcv5v\" (UID: \"e9ea5798-bfec-4380-b5db-eee20abfe719\") " pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:18 crc kubenswrapper[4851]: I0223 13:22:18.305833 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:18 crc kubenswrapper[4851]: I0223 13:22:18.377785 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v"] Feb 23 13:22:18 crc kubenswrapper[4851]: W0223 13:22:18.397226 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0926a535_2dd4_4e82_9bff_6f806330985a.slice/crio-5ef2ac58ec8644085f17ad80e9cabe3007ea0fe53c519add26dcea29e967414a WatchSource:0}: Error finding container 5ef2ac58ec8644085f17ad80e9cabe3007ea0fe53c519add26dcea29e967414a: Status 404 returned error can't find the container with id 5ef2ac58ec8644085f17ad80e9cabe3007ea0fe53c519add26dcea29e967414a Feb 23 13:22:19 crc kubenswrapper[4851]: I0223 13:22:18.574060 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v"] Feb 23 13:22:19 crc kubenswrapper[4851]: W0223 13:22:18.587833 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9ea5798_bfec_4380_b5db_eee20abfe719.slice/crio-5b14400b1dfa7abacbef71cd11ec76db6bbe20d66688864d7f891fa4e4673f1e WatchSource:0}: Error finding container 5b14400b1dfa7abacbef71cd11ec76db6bbe20d66688864d7f891fa4e4673f1e: Status 404 returned error can't find the container with id 5b14400b1dfa7abacbef71cd11ec76db6bbe20d66688864d7f891fa4e4673f1e Feb 23 13:22:19 crc kubenswrapper[4851]: I0223 13:22:18.777883 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" event={"ID":"0926a535-2dd4-4e82-9bff-6f806330985a","Type":"ContainerStarted","Data":"5ef2ac58ec8644085f17ad80e9cabe3007ea0fe53c519add26dcea29e967414a"} Feb 23 13:22:19 crc kubenswrapper[4851]: I0223 13:22:18.778957 4851 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" event={"ID":"e9ea5798-bfec-4380-b5db-eee20abfe719","Type":"ContainerStarted","Data":"5b14400b1dfa7abacbef71cd11ec76db6bbe20d66688864d7f891fa4e4673f1e"} Feb 23 13:22:19 crc kubenswrapper[4851]: I0223 13:22:18.791376 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgxrn" event={"ID":"b098cb52-4356-48bb-86a5-3063470b8d4c","Type":"ContainerStarted","Data":"062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5"} Feb 23 13:22:19 crc kubenswrapper[4851]: I0223 13:22:19.797653 4851 generic.go:334] "Generic (PLEG): container finished" podID="b098cb52-4356-48bb-86a5-3063470b8d4c" containerID="062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5" exitCode=0 Feb 23 13:22:19 crc kubenswrapper[4851]: I0223 13:22:19.797812 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgxrn" event={"ID":"b098cb52-4356-48bb-86a5-3063470b8d4c","Type":"ContainerDied","Data":"062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5"} Feb 23 13:22:20 crc kubenswrapper[4851]: I0223 13:22:20.804913 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgxrn" event={"ID":"b098cb52-4356-48bb-86a5-3063470b8d4c","Type":"ContainerStarted","Data":"fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a"} Feb 23 13:22:21 crc kubenswrapper[4851]: I0223 13:22:21.037361 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9x4j"] Feb 23 13:22:21 crc kubenswrapper[4851]: I0223 13:22:21.037568 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f9x4j" podUID="67ae49f4-e643-459d-934c-96aa760166b5" containerName="registry-server" containerID="cri-o://657905048de2e48e2ee12f5606fec2e95b94d09b444391ba2c6eb9af47964b2e" 
gracePeriod=2 Feb 23 13:22:21 crc kubenswrapper[4851]: I0223 13:22:21.818751 4851 generic.go:334] "Generic (PLEG): container finished" podID="67ae49f4-e643-459d-934c-96aa760166b5" containerID="657905048de2e48e2ee12f5606fec2e95b94d09b444391ba2c6eb9af47964b2e" exitCode=0 Feb 23 13:22:21 crc kubenswrapper[4851]: I0223 13:22:21.818837 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9x4j" event={"ID":"67ae49f4-e643-459d-934c-96aa760166b5","Type":"ContainerDied","Data":"657905048de2e48e2ee12f5606fec2e95b94d09b444391ba2c6eb9af47964b2e"} Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.125855 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.136770 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-utilities\") pod \"67ae49f4-e643-459d-934c-96aa760166b5\" (UID: \"67ae49f4-e643-459d-934c-96aa760166b5\") " Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.136839 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8mbr\" (UniqueName: \"kubernetes.io/projected/67ae49f4-e643-459d-934c-96aa760166b5-kube-api-access-k8mbr\") pod \"67ae49f4-e643-459d-934c-96aa760166b5\" (UID: \"67ae49f4-e643-459d-934c-96aa760166b5\") " Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.136894 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-catalog-content\") pod \"67ae49f4-e643-459d-934c-96aa760166b5\" (UID: \"67ae49f4-e643-459d-934c-96aa760166b5\") " Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.137536 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-utilities" (OuterVolumeSpecName: "utilities") pod "67ae49f4-e643-459d-934c-96aa760166b5" (UID: "67ae49f4-e643-459d-934c-96aa760166b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.144490 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ae49f4-e643-459d-934c-96aa760166b5-kube-api-access-k8mbr" (OuterVolumeSpecName: "kube-api-access-k8mbr") pod "67ae49f4-e643-459d-934c-96aa760166b5" (UID: "67ae49f4-e643-459d-934c-96aa760166b5"). InnerVolumeSpecName "kube-api-access-k8mbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.145881 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mgxrn" podStartSLOduration=4.607381162 podStartE2EDuration="7.145861379s" podCreationTimestamp="2026-02-23 13:22:16 +0000 UTC" firstStartedPulling="2026-02-23 13:22:17.773052678 +0000 UTC m=+892.454756356" lastFinishedPulling="2026-02-23 13:22:20.311532885 +0000 UTC m=+894.993236573" observedRunningTime="2026-02-23 13:22:21.265251601 +0000 UTC m=+895.946955279" watchObservedRunningTime="2026-02-23 13:22:23.145861379 +0000 UTC m=+897.827565057" Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.189014 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67ae49f4-e643-459d-934c-96aa760166b5" (UID: "67ae49f4-e643-459d-934c-96aa760166b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.237809 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.237851 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ae49f4-e643-459d-934c-96aa760166b5-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.237862 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8mbr\" (UniqueName: \"kubernetes.io/projected/67ae49f4-e643-459d-934c-96aa760166b5-kube-api-access-k8mbr\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.961465 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f9x4j" event={"ID":"67ae49f4-e643-459d-934c-96aa760166b5","Type":"ContainerDied","Data":"c71d33dc231c39f04f85fc6a4eff6d7155565c4e7b2099e2318d86689dfe498d"} Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.961525 4851 scope.go:117] "RemoveContainer" containerID="657905048de2e48e2ee12f5606fec2e95b94d09b444391ba2c6eb9af47964b2e" Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.961667 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f9x4j" Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.988710 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:23 crc kubenswrapper[4851]: I0223 13:22:23.988740 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" event={"ID":"0926a535-2dd4-4e82-9bff-6f806330985a","Type":"ContainerStarted","Data":"fb320fb1c9eddf6555d07786524a82db6a687da6955f6ce07606a4308b0ae2f2"} Feb 23 13:22:24 crc kubenswrapper[4851]: I0223 13:22:24.041590 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" podStartSLOduration=2.474883926 podStartE2EDuration="7.041573185s" podCreationTimestamp="2026-02-23 13:22:17 +0000 UTC" firstStartedPulling="2026-02-23 13:22:18.4096327 +0000 UTC m=+893.091336378" lastFinishedPulling="2026-02-23 13:22:22.976321959 +0000 UTC m=+897.658025637" observedRunningTime="2026-02-23 13:22:24.039724412 +0000 UTC m=+898.721428120" watchObservedRunningTime="2026-02-23 13:22:24.041573185 +0000 UTC m=+898.723276863" Feb 23 13:22:24 crc kubenswrapper[4851]: I0223 13:22:24.071414 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9x4j"] Feb 23 13:22:24 crc kubenswrapper[4851]: I0223 13:22:24.075160 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f9x4j"] Feb 23 13:22:25 crc kubenswrapper[4851]: I0223 13:22:25.796530 4851 scope.go:117] "RemoveContainer" containerID="9ef093c3e937f80fa833e9e1a14a2b5adab99d98135b8d5e2e3b8b3307ef7000" Feb 23 13:22:25 crc kubenswrapper[4851]: I0223 13:22:25.841739 4851 scope.go:117] "RemoveContainer" containerID="d82ad8ae22297fe1edc67596234d6ca9b6d0e39aef80bbf0e3384c7ac5252ba4" Feb 23 13:22:25 crc 
kubenswrapper[4851]: I0223 13:22:25.977994 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ae49f4-e643-459d-934c-96aa760166b5" path="/var/lib/kubelet/pods/67ae49f4-e643-459d-934c-96aa760166b5/volumes" Feb 23 13:22:26 crc kubenswrapper[4851]: I0223 13:22:26.646508 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:26 crc kubenswrapper[4851]: I0223 13:22:26.646870 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:26 crc kubenswrapper[4851]: I0223 13:22:26.688818 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:26 crc kubenswrapper[4851]: I0223 13:22:26.997812 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" event={"ID":"e9ea5798-bfec-4380-b5db-eee20abfe719","Type":"ContainerStarted","Data":"a7c10fda94767cf8fcdae390ef99884fb73d0ac48f9d95f9c65d40264f607e01"} Feb 23 13:22:26 crc kubenswrapper[4851]: I0223 13:22:26.997870 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:27 crc kubenswrapper[4851]: I0223 13:22:27.019029 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" podStartSLOduration=2.739715537 podStartE2EDuration="10.019011366s" podCreationTimestamp="2026-02-23 13:22:17 +0000 UTC" firstStartedPulling="2026-02-23 13:22:18.595055673 +0000 UTC m=+893.276759351" lastFinishedPulling="2026-02-23 13:22:25.874351492 +0000 UTC m=+900.556055180" observedRunningTime="2026-02-23 13:22:27.017280238 +0000 UTC m=+901.698983976" watchObservedRunningTime="2026-02-23 13:22:27.019011366 +0000 UTC m=+901.700715044" Feb 23 
13:22:27 crc kubenswrapper[4851]: I0223 13:22:27.041213 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.119245 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mgxrn"] Feb 23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.120708 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mgxrn" podUID="b098cb52-4356-48bb-86a5-3063470b8d4c" containerName="registry-server" containerID="cri-o://fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a" gracePeriod=2 Feb 23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.519280 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.628765 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h82nj\" (UniqueName: \"kubernetes.io/projected/b098cb52-4356-48bb-86a5-3063470b8d4c-kube-api-access-h82nj\") pod \"b098cb52-4356-48bb-86a5-3063470b8d4c\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " Feb 23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.628917 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-catalog-content\") pod \"b098cb52-4356-48bb-86a5-3063470b8d4c\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " Feb 23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.628948 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-utilities\") pod \"b098cb52-4356-48bb-86a5-3063470b8d4c\" (UID: \"b098cb52-4356-48bb-86a5-3063470b8d4c\") " Feb 
23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.629742 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-utilities" (OuterVolumeSpecName: "utilities") pod "b098cb52-4356-48bb-86a5-3063470b8d4c" (UID: "b098cb52-4356-48bb-86a5-3063470b8d4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.634405 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b098cb52-4356-48bb-86a5-3063470b8d4c-kube-api-access-h82nj" (OuterVolumeSpecName: "kube-api-access-h82nj") pod "b098cb52-4356-48bb-86a5-3063470b8d4c" (UID: "b098cb52-4356-48bb-86a5-3063470b8d4c"). InnerVolumeSpecName "kube-api-access-h82nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.730667 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h82nj\" (UniqueName: \"kubernetes.io/projected/b098cb52-4356-48bb-86a5-3063470b8d4c-kube-api-access-h82nj\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.730704 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.878293 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b098cb52-4356-48bb-86a5-3063470b8d4c" (UID: "b098cb52-4356-48bb-86a5-3063470b8d4c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:22:29 crc kubenswrapper[4851]: I0223 13:22:29.933192 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b098cb52-4356-48bb-86a5-3063470b8d4c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.012460 4851 generic.go:334] "Generic (PLEG): container finished" podID="b098cb52-4356-48bb-86a5-3063470b8d4c" containerID="fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a" exitCode=0 Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.012508 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mgxrn" Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.012509 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgxrn" event={"ID":"b098cb52-4356-48bb-86a5-3063470b8d4c","Type":"ContainerDied","Data":"fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a"} Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.012538 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mgxrn" event={"ID":"b098cb52-4356-48bb-86a5-3063470b8d4c","Type":"ContainerDied","Data":"7e3ec852ffc8440c1b591d715c41693c25410559384d6eeb790d89d852807642"} Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.012556 4851 scope.go:117] "RemoveContainer" containerID="fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a" Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.030024 4851 scope.go:117] "RemoveContainer" containerID="062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5" Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.030424 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mgxrn"] Feb 23 13:22:30 crc kubenswrapper[4851]: 
I0223 13:22:30.034664 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mgxrn"] Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.044413 4851 scope.go:117] "RemoveContainer" containerID="00b57575a9e38fc53fd3815803c47ad089160810993e75ab459e35a907b1a9c6" Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.076242 4851 scope.go:117] "RemoveContainer" containerID="fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a" Feb 23 13:22:30 crc kubenswrapper[4851]: E0223 13:22:30.077120 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a\": container with ID starting with fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a not found: ID does not exist" containerID="fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a" Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.077206 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a"} err="failed to get container status \"fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a\": rpc error: code = NotFound desc = could not find container \"fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a\": container with ID starting with fa60ff192d82cbc8ce027c792a81ed5e302a9ce41053de5acea5581a0417d57a not found: ID does not exist" Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.077254 4851 scope.go:117] "RemoveContainer" containerID="062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5" Feb 23 13:22:30 crc kubenswrapper[4851]: E0223 13:22:30.077915 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5\": container 
with ID starting with 062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5 not found: ID does not exist" containerID="062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5" Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.077964 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5"} err="failed to get container status \"062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5\": rpc error: code = NotFound desc = could not find container \"062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5\": container with ID starting with 062e91d3659c7525e5f31e262118d8f1b2c0df5b867b9c5bc4bfbabb5a0329a5 not found: ID does not exist" Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.077996 4851 scope.go:117] "RemoveContainer" containerID="00b57575a9e38fc53fd3815803c47ad089160810993e75ab459e35a907b1a9c6" Feb 23 13:22:30 crc kubenswrapper[4851]: E0223 13:22:30.078306 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00b57575a9e38fc53fd3815803c47ad089160810993e75ab459e35a907b1a9c6\": container with ID starting with 00b57575a9e38fc53fd3815803c47ad089160810993e75ab459e35a907b1a9c6 not found: ID does not exist" containerID="00b57575a9e38fc53fd3815803c47ad089160810993e75ab459e35a907b1a9c6" Feb 23 13:22:30 crc kubenswrapper[4851]: I0223 13:22:30.078371 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00b57575a9e38fc53fd3815803c47ad089160810993e75ab459e35a907b1a9c6"} err="failed to get container status \"00b57575a9e38fc53fd3815803c47ad089160810993e75ab459e35a907b1a9c6\": rpc error: code = NotFound desc = could not find container \"00b57575a9e38fc53fd3815803c47ad089160810993e75ab459e35a907b1a9c6\": container with ID starting with 00b57575a9e38fc53fd3815803c47ad089160810993e75ab459e35a907b1a9c6 not 
found: ID does not exist" Feb 23 13:22:31 crc kubenswrapper[4851]: I0223 13:22:31.974679 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b098cb52-4356-48bb-86a5-3063470b8d4c" path="/var/lib/kubelet/pods/b098cb52-4356-48bb-86a5-3063470b8d4c/volumes" Feb 23 13:22:38 crc kubenswrapper[4851]: I0223 13:22:38.312239 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5d6f8cc6fd-wcv5v" Feb 23 13:22:41 crc kubenswrapper[4851]: I0223 13:22:41.925117 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:22:41 crc kubenswrapper[4851]: I0223 13:22:41.925435 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:22:57 crc kubenswrapper[4851]: I0223 13:22:57.880595 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-58d4d555d4-9b64v" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.713996 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hb2x8"] Feb 23 13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.714669 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b098cb52-4356-48bb-86a5-3063470b8d4c" containerName="registry-server" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.714692 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b098cb52-4356-48bb-86a5-3063470b8d4c" containerName="registry-server" Feb 23 
13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.714704 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ae49f4-e643-459d-934c-96aa760166b5" containerName="registry-server" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.714712 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ae49f4-e643-459d-934c-96aa760166b5" containerName="registry-server" Feb 23 13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.714727 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ae49f4-e643-459d-934c-96aa760166b5" containerName="extract-content" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.714735 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ae49f4-e643-459d-934c-96aa760166b5" containerName="extract-content" Feb 23 13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.714742 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b098cb52-4356-48bb-86a5-3063470b8d4c" containerName="extract-content" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.714750 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b098cb52-4356-48bb-86a5-3063470b8d4c" containerName="extract-content" Feb 23 13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.714763 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b098cb52-4356-48bb-86a5-3063470b8d4c" containerName="extract-utilities" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.714771 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b098cb52-4356-48bb-86a5-3063470b8d4c" containerName="extract-utilities" Feb 23 13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.714786 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ae49f4-e643-459d-934c-96aa760166b5" containerName="extract-utilities" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.714793 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ae49f4-e643-459d-934c-96aa760166b5" containerName="extract-utilities" Feb 23 
13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.714908 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b098cb52-4356-48bb-86a5-3063470b8d4c" containerName="registry-server" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.714929 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ae49f4-e643-459d-934c-96aa760166b5" containerName="registry-server" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.717169 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.720723 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.721284 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-pqwbl" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.721291 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.723221 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9"] Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.724522 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.726016 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.734423 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9"] Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.795754 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fvlxq"] Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.796659 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.799055 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.799245 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.799397 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-r94cv" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.799522 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.803941 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-88gf5"] Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.805165 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.813533 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815130 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4tbn\" (UniqueName: \"kubernetes.io/projected/21b51896-5127-4eef-8f88-87b1e811103c-kube-api-access-z4tbn\") pod \"frr-k8s-webhook-server-78b44bf5bb-8qsz9\" (UID: \"21b51896-5127-4eef-8f88-87b1e811103c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815176 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-frr-conf\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815196 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-frr-sockets\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815224 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlsmn\" (UniqueName: \"kubernetes.io/projected/44188c33-1cb1-4c27-8314-4431469de3bb-kube-api-access-xlsmn\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815244 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-zmmb4\" (UniqueName: \"kubernetes.io/projected/033fbbfa-b771-4acb-a64c-7212064277b3-kube-api-access-zmmb4\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815269 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/033fbbfa-b771-4acb-a64c-7212064277b3-metrics-certs\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815283 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq8n7\" (UniqueName: \"kubernetes.io/projected/2911e001-3b48-4ffc-9681-100739828235-kube-api-access-cq8n7\") pod \"controller-69bbfbf88f-88gf5\" (UID: \"2911e001-3b48-4ffc-9681-100739828235\") " pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815319 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21b51896-5127-4eef-8f88-87b1e811103c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-8qsz9\" (UID: \"21b51896-5127-4eef-8f88-87b1e811103c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815355 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-memberlist\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815375 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/44188c33-1cb1-4c27-8314-4431469de3bb-metallb-excludel2\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815389 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2911e001-3b48-4ffc-9681-100739828235-cert\") pod \"controller-69bbfbf88f-88gf5\" (UID: \"2911e001-3b48-4ffc-9681-100739828235\") " pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815409 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/033fbbfa-b771-4acb-a64c-7212064277b3-frr-startup\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815434 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-metrics\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815449 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-metrics-certs\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815478 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-reloader\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.815495 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2911e001-3b48-4ffc-9681-100739828235-metrics-certs\") pod \"controller-69bbfbf88f-88gf5\" (UID: \"2911e001-3b48-4ffc-9681-100739828235\") " pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.816979 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-88gf5"] Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916285 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21b51896-5127-4eef-8f88-87b1e811103c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-8qsz9\" (UID: \"21b51896-5127-4eef-8f88-87b1e811103c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916357 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-memberlist\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916382 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2911e001-3b48-4ffc-9681-100739828235-cert\") pod \"controller-69bbfbf88f-88gf5\" (UID: \"2911e001-3b48-4ffc-9681-100739828235\") " pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916405 4851 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/44188c33-1cb1-4c27-8314-4431469de3bb-metallb-excludel2\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916434 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/033fbbfa-b771-4acb-a64c-7212064277b3-frr-startup\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916470 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-metrics\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916489 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-metrics-certs\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916523 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-reloader\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.916541 4851 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 23 13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.916636 4851 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret 
"controller-certs-secret" not found Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916546 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2911e001-3b48-4ffc-9681-100739828235-metrics-certs\") pod \"controller-69bbfbf88f-88gf5\" (UID: \"2911e001-3b48-4ffc-9681-100739828235\") " pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.916642 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-memberlist podName:44188c33-1cb1-4c27-8314-4431469de3bb nodeName:}" failed. No retries permitted until 2026-02-23 13:22:59.416611572 +0000 UTC m=+934.098315250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-memberlist") pod "speaker-fvlxq" (UID: "44188c33-1cb1-4c27-8314-4431469de3bb") : secret "metallb-memberlist" not found Feb 23 13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.916823 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2911e001-3b48-4ffc-9681-100739828235-metrics-certs podName:2911e001-3b48-4ffc-9681-100739828235 nodeName:}" failed. No retries permitted until 2026-02-23 13:22:59.416799338 +0000 UTC m=+934.098503016 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2911e001-3b48-4ffc-9681-100739828235-metrics-certs") pod "controller-69bbfbf88f-88gf5" (UID: "2911e001-3b48-4ffc-9681-100739828235") : secret "controller-certs-secret" not found Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916855 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4tbn\" (UniqueName: \"kubernetes.io/projected/21b51896-5127-4eef-8f88-87b1e811103c-kube-api-access-z4tbn\") pod \"frr-k8s-webhook-server-78b44bf5bb-8qsz9\" (UID: \"21b51896-5127-4eef-8f88-87b1e811103c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916884 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-frr-conf\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916904 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-frr-sockets\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916946 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlsmn\" (UniqueName: \"kubernetes.io/projected/44188c33-1cb1-4c27-8314-4431469de3bb-kube-api-access-xlsmn\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.916979 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmmb4\" (UniqueName: 
\"kubernetes.io/projected/033fbbfa-b771-4acb-a64c-7212064277b3-kube-api-access-zmmb4\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.917021 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/033fbbfa-b771-4acb-a64c-7212064277b3-metrics-certs\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.917036 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq8n7\" (UniqueName: \"kubernetes.io/projected/2911e001-3b48-4ffc-9681-100739828235-kube-api-access-cq8n7\") pod \"controller-69bbfbf88f-88gf5\" (UID: \"2911e001-3b48-4ffc-9681-100739828235\") " pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.917090 4851 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.917129 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-reloader\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: E0223 13:22:58.917144 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/033fbbfa-b771-4acb-a64c-7212064277b3-metrics-certs podName:033fbbfa-b771-4acb-a64c-7212064277b3 nodeName:}" failed. No retries permitted until 2026-02-23 13:22:59.417127777 +0000 UTC m=+934.098831455 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/033fbbfa-b771-4acb-a64c-7212064277b3-metrics-certs") pod "frr-k8s-hb2x8" (UID: "033fbbfa-b771-4acb-a64c-7212064277b3") : secret "frr-k8s-certs-secret" not found Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.917022 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-metrics\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.917301 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/44188c33-1cb1-4c27-8314-4431469de3bb-metallb-excludel2\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.917575 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-frr-conf\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.917962 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/033fbbfa-b771-4acb-a64c-7212064277b3-frr-sockets\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.919528 4851 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.919828 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/033fbbfa-b771-4acb-a64c-7212064277b3-frr-startup\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.922391 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21b51896-5127-4eef-8f88-87b1e811103c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-8qsz9\" (UID: \"21b51896-5127-4eef-8f88-87b1e811103c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.924017 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-metrics-certs\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.931916 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2911e001-3b48-4ffc-9681-100739828235-cert\") pod \"controller-69bbfbf88f-88gf5\" (UID: \"2911e001-3b48-4ffc-9681-100739828235\") " pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.941098 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlsmn\" (UniqueName: \"kubernetes.io/projected/44188c33-1cb1-4c27-8314-4431469de3bb-kube-api-access-xlsmn\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.941293 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmmb4\" (UniqueName: \"kubernetes.io/projected/033fbbfa-b771-4acb-a64c-7212064277b3-kube-api-access-zmmb4\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " 
pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.941588 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq8n7\" (UniqueName: \"kubernetes.io/projected/2911e001-3b48-4ffc-9681-100739828235-kube-api-access-cq8n7\") pod \"controller-69bbfbf88f-88gf5\" (UID: \"2911e001-3b48-4ffc-9681-100739828235\") " pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:58 crc kubenswrapper[4851]: I0223 13:22:58.945915 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4tbn\" (UniqueName: \"kubernetes.io/projected/21b51896-5127-4eef-8f88-87b1e811103c-kube-api-access-z4tbn\") pod \"frr-k8s-webhook-server-78b44bf5bb-8qsz9\" (UID: \"21b51896-5127-4eef-8f88-87b1e811103c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" Feb 23 13:22:59 crc kubenswrapper[4851]: I0223 13:22:59.047322 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" Feb 23 13:22:59 crc kubenswrapper[4851]: I0223 13:22:59.422851 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2911e001-3b48-4ffc-9681-100739828235-metrics-certs\") pod \"controller-69bbfbf88f-88gf5\" (UID: \"2911e001-3b48-4ffc-9681-100739828235\") " pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:59 crc kubenswrapper[4851]: I0223 13:22:59.423267 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/033fbbfa-b771-4acb-a64c-7212064277b3-metrics-certs\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:59 crc kubenswrapper[4851]: I0223 13:22:59.423302 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-memberlist\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:22:59 crc kubenswrapper[4851]: E0223 13:22:59.423484 4851 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 23 13:22:59 crc kubenswrapper[4851]: E0223 13:22:59.423544 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-memberlist podName:44188c33-1cb1-4c27-8314-4431469de3bb nodeName:}" failed. No retries permitted until 2026-02-23 13:23:00.423529081 +0000 UTC m=+935.105232759 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-memberlist") pod "speaker-fvlxq" (UID: "44188c33-1cb1-4c27-8314-4431469de3bb") : secret "metallb-memberlist" not found Feb 23 13:22:59 crc kubenswrapper[4851]: I0223 13:22:59.428174 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2911e001-3b48-4ffc-9681-100739828235-metrics-certs\") pod \"controller-69bbfbf88f-88gf5\" (UID: \"2911e001-3b48-4ffc-9681-100739828235\") " pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:59 crc kubenswrapper[4851]: I0223 13:22:59.428431 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/033fbbfa-b771-4acb-a64c-7212064277b3-metrics-certs\") pod \"frr-k8s-hb2x8\" (UID: \"033fbbfa-b771-4acb-a64c-7212064277b3\") " pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:59 crc kubenswrapper[4851]: I0223 13:22:59.473321 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9"] Feb 23 13:22:59 crc kubenswrapper[4851]: I0223 13:22:59.633464 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:22:59 crc kubenswrapper[4851]: I0223 13:22:59.724378 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:22:59 crc kubenswrapper[4851]: I0223 13:22:59.899770 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-88gf5"] Feb 23 13:22:59 crc kubenswrapper[4851]: W0223 13:22:59.906539 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2911e001_3b48_4ffc_9681_100739828235.slice/crio-959ff813fb5ea2e0818f98de68a38f45a1b015e18117a25807bb17ae406b46ad WatchSource:0}: Error finding container 959ff813fb5ea2e0818f98de68a38f45a1b015e18117a25807bb17ae406b46ad: Status 404 returned error can't find the container with id 959ff813fb5ea2e0818f98de68a38f45a1b015e18117a25807bb17ae406b46ad Feb 23 13:23:00 crc kubenswrapper[4851]: I0223 13:23:00.212650 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-88gf5" event={"ID":"2911e001-3b48-4ffc-9681-100739828235","Type":"ContainerStarted","Data":"f60806c6653456a31d4c48983f5f00cc26a0edd7b30622f396fb06bd30ab6cd9"} Feb 23 13:23:00 crc kubenswrapper[4851]: I0223 13:23:00.212974 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:23:00 crc kubenswrapper[4851]: I0223 13:23:00.212988 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-88gf5" event={"ID":"2911e001-3b48-4ffc-9681-100739828235","Type":"ContainerStarted","Data":"296ed752be2020b94c08a2fd53e9c65ff4a76f56fef258da861c98b34cce6a7e"} Feb 23 13:23:00 crc kubenswrapper[4851]: I0223 13:23:00.212999 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-88gf5" 
event={"ID":"2911e001-3b48-4ffc-9681-100739828235","Type":"ContainerStarted","Data":"959ff813fb5ea2e0818f98de68a38f45a1b015e18117a25807bb17ae406b46ad"} Feb 23 13:23:00 crc kubenswrapper[4851]: I0223 13:23:00.214511 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" event={"ID":"21b51896-5127-4eef-8f88-87b1e811103c","Type":"ContainerStarted","Data":"103c83db7ccc08938efbb84e31375257051a3305b530a190767572004e63d8f7"} Feb 23 13:23:00 crc kubenswrapper[4851]: I0223 13:23:00.215498 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hb2x8" event={"ID":"033fbbfa-b771-4acb-a64c-7212064277b3","Type":"ContainerStarted","Data":"8b85c27377e4ff07f081a13872f8fe1db65fd1537c5b2fc552c487965dbacfd7"} Feb 23 13:23:00 crc kubenswrapper[4851]: I0223 13:23:00.447250 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-memberlist\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:23:00 crc kubenswrapper[4851]: I0223 13:23:00.453865 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/44188c33-1cb1-4c27-8314-4431469de3bb-memberlist\") pod \"speaker-fvlxq\" (UID: \"44188c33-1cb1-4c27-8314-4431469de3bb\") " pod="metallb-system/speaker-fvlxq" Feb 23 13:23:00 crc kubenswrapper[4851]: I0223 13:23:00.614556 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-fvlxq" Feb 23 13:23:01 crc kubenswrapper[4851]: I0223 13:23:01.242900 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fvlxq" event={"ID":"44188c33-1cb1-4c27-8314-4431469de3bb","Type":"ContainerStarted","Data":"fd67493411319dedc12eb22a3c8da9e84a3e8c4967e99c7f41af7ad51ff104a8"} Feb 23 13:23:01 crc kubenswrapper[4851]: I0223 13:23:01.243213 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fvlxq" event={"ID":"44188c33-1cb1-4c27-8314-4431469de3bb","Type":"ContainerStarted","Data":"98a99d1630007770599e5fcdefbc2f04ad0497ef3366ecf31e045dfe94fb39a4"} Feb 23 13:23:02 crc kubenswrapper[4851]: I0223 13:23:02.254972 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fvlxq" event={"ID":"44188c33-1cb1-4c27-8314-4431469de3bb","Type":"ContainerStarted","Data":"af97b8525ac9bf9816ec2fb0e1c58c52f5ed74adbefdd369a94c1fca7d9af15d"} Feb 23 13:23:02 crc kubenswrapper[4851]: I0223 13:23:02.255518 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fvlxq" Feb 23 13:23:02 crc kubenswrapper[4851]: I0223 13:23:02.280527 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fvlxq" podStartSLOduration=4.280506201 podStartE2EDuration="4.280506201s" podCreationTimestamp="2026-02-23 13:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:23:02.277778613 +0000 UTC m=+936.959482291" watchObservedRunningTime="2026-02-23 13:23:02.280506201 +0000 UTC m=+936.962209879" Feb 23 13:23:02 crc kubenswrapper[4851]: I0223 13:23:02.281959 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-88gf5" podStartSLOduration=4.281952752 podStartE2EDuration="4.281952752s" podCreationTimestamp="2026-02-23 13:22:58 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:23:00.225986362 +0000 UTC m=+934.907690050" watchObservedRunningTime="2026-02-23 13:23:02.281952752 +0000 UTC m=+936.963656420" Feb 23 13:23:07 crc kubenswrapper[4851]: I0223 13:23:07.288928 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" event={"ID":"21b51896-5127-4eef-8f88-87b1e811103c","Type":"ContainerStarted","Data":"f8065e8f3bf59e89f642feec75463918f6fd6e64a7f93792dcd5c1077a6ae90a"} Feb 23 13:23:07 crc kubenswrapper[4851]: I0223 13:23:07.289515 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" Feb 23 13:23:07 crc kubenswrapper[4851]: I0223 13:23:07.290486 4851 generic.go:334] "Generic (PLEG): container finished" podID="033fbbfa-b771-4acb-a64c-7212064277b3" containerID="3a3220ca704243a4ef8d31d24419140e140b6fc2d2cba80b89524422a2b1f3d4" exitCode=0 Feb 23 13:23:07 crc kubenswrapper[4851]: I0223 13:23:07.290509 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hb2x8" event={"ID":"033fbbfa-b771-4acb-a64c-7212064277b3","Type":"ContainerDied","Data":"3a3220ca704243a4ef8d31d24419140e140b6fc2d2cba80b89524422a2b1f3d4"} Feb 23 13:23:07 crc kubenswrapper[4851]: I0223 13:23:07.313858 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" podStartSLOduration=2.298767364 podStartE2EDuration="9.313320553s" podCreationTimestamp="2026-02-23 13:22:58 +0000 UTC" firstStartedPulling="2026-02-23 13:22:59.482233154 +0000 UTC m=+934.163936832" lastFinishedPulling="2026-02-23 13:23:06.496786353 +0000 UTC m=+941.178490021" observedRunningTime="2026-02-23 13:23:07.30684497 +0000 UTC m=+941.988548658" watchObservedRunningTime="2026-02-23 13:23:07.313320553 +0000 UTC m=+941.995024241" Feb 23 
13:23:08 crc kubenswrapper[4851]: I0223 13:23:08.297410 4851 generic.go:334] "Generic (PLEG): container finished" podID="033fbbfa-b771-4acb-a64c-7212064277b3" containerID="aa4c30b962da83d906026e8329a4815036b2de2280dcf392f78ece3aee026065" exitCode=0 Feb 23 13:23:08 crc kubenswrapper[4851]: I0223 13:23:08.297463 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hb2x8" event={"ID":"033fbbfa-b771-4acb-a64c-7212064277b3","Type":"ContainerDied","Data":"aa4c30b962da83d906026e8329a4815036b2de2280dcf392f78ece3aee026065"} Feb 23 13:23:09 crc kubenswrapper[4851]: I0223 13:23:09.308264 4851 generic.go:334] "Generic (PLEG): container finished" podID="033fbbfa-b771-4acb-a64c-7212064277b3" containerID="338e3f82870595b549241e716454a733c6b95642954973b89c2f58c50d1da1a7" exitCode=0 Feb 23 13:23:09 crc kubenswrapper[4851]: I0223 13:23:09.308409 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hb2x8" event={"ID":"033fbbfa-b771-4acb-a64c-7212064277b3","Type":"ContainerDied","Data":"338e3f82870595b549241e716454a733c6b95642954973b89c2f58c50d1da1a7"} Feb 23 13:23:10 crc kubenswrapper[4851]: I0223 13:23:10.315831 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hb2x8" event={"ID":"033fbbfa-b771-4acb-a64c-7212064277b3","Type":"ContainerStarted","Data":"ab8a44f1cef45b2e0f4d2847ae715c04fcb234823fec79dd25662e4d31666fdf"} Feb 23 13:23:10 crc kubenswrapper[4851]: I0223 13:23:10.316177 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hb2x8" event={"ID":"033fbbfa-b771-4acb-a64c-7212064277b3","Type":"ContainerStarted","Data":"3cf47fdc4fa9fd57364fc02d38d8dc5e0a3d4991e223372a2a841114f7695787"} Feb 23 13:23:10 crc kubenswrapper[4851]: I0223 13:23:10.316192 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hb2x8" 
event={"ID":"033fbbfa-b771-4acb-a64c-7212064277b3","Type":"ContainerStarted","Data":"ddf6307825953152c277b06195f8f99ff595c64691a0881173a64e529ef76c85"} Feb 23 13:23:10 crc kubenswrapper[4851]: I0223 13:23:10.316203 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hb2x8" event={"ID":"033fbbfa-b771-4acb-a64c-7212064277b3","Type":"ContainerStarted","Data":"b83803bdb6eedf3df981bd0fd25ad5c7df4d230fd308609b92cea39238d83e9f"} Feb 23 13:23:10 crc kubenswrapper[4851]: I0223 13:23:10.316212 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hb2x8" event={"ID":"033fbbfa-b771-4acb-a64c-7212064277b3","Type":"ContainerStarted","Data":"2759190129bd633b882153c73cf046bfe3bb20991db14fe017e57c5c3871784e"} Feb 23 13:23:10 crc kubenswrapper[4851]: I0223 13:23:10.316222 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hb2x8" event={"ID":"033fbbfa-b771-4acb-a64c-7212064277b3","Type":"ContainerStarted","Data":"d3711ac2e6026bb98f66b0908e9631bd9e44b107d6adb9188107c6bb617f1a3a"} Feb 23 13:23:10 crc kubenswrapper[4851]: I0223 13:23:10.316257 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:23:10 crc kubenswrapper[4851]: I0223 13:23:10.334290 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hb2x8" podStartSLOduration=5.558955055 podStartE2EDuration="12.334276437s" podCreationTimestamp="2026-02-23 13:22:58 +0000 UTC" firstStartedPulling="2026-02-23 13:22:59.724708973 +0000 UTC m=+934.406412651" lastFinishedPulling="2026-02-23 13:23:06.500030355 +0000 UTC m=+941.181734033" observedRunningTime="2026-02-23 13:23:10.334005749 +0000 UTC m=+945.015709437" watchObservedRunningTime="2026-02-23 13:23:10.334276437 +0000 UTC m=+945.015980115" Feb 23 13:23:10 crc kubenswrapper[4851]: I0223 13:23:10.619804 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/speaker-fvlxq" Feb 23 13:23:11 crc kubenswrapper[4851]: I0223 13:23:11.925281 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:23:11 crc kubenswrapper[4851]: I0223 13:23:11.925603 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:23:11 crc kubenswrapper[4851]: I0223 13:23:11.925655 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:23:11 crc kubenswrapper[4851]: I0223 13:23:11.926161 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b56c77e1de63323e9342dc73ec952ed4e450a54675ac5d33629ae895364039c"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 13:23:11 crc kubenswrapper[4851]: I0223 13:23:11.926207 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://1b56c77e1de63323e9342dc73ec952ed4e450a54675ac5d33629ae895364039c" gracePeriod=600 Feb 23 13:23:12 crc kubenswrapper[4851]: I0223 13:23:12.329235 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" 
containerID="1b56c77e1de63323e9342dc73ec952ed4e450a54675ac5d33629ae895364039c" exitCode=0 Feb 23 13:23:12 crc kubenswrapper[4851]: I0223 13:23:12.329317 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"1b56c77e1de63323e9342dc73ec952ed4e450a54675ac5d33629ae895364039c"} Feb 23 13:23:12 crc kubenswrapper[4851]: I0223 13:23:12.329408 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"60927ae79050568035bcfc1c3f4be4f3b0b6745f639bddbc9d3c155365093c4b"} Feb 23 13:23:12 crc kubenswrapper[4851]: I0223 13:23:12.329428 4851 scope.go:117] "RemoveContainer" containerID="99b977156f80b246ca5cd408d8663acbebd25daf86a562bb337535d76fe02c36" Feb 23 13:23:13 crc kubenswrapper[4851]: I0223 13:23:13.233382 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kcrgn"] Feb 23 13:23:13 crc kubenswrapper[4851]: I0223 13:23:13.234368 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kcrgn" Feb 23 13:23:13 crc kubenswrapper[4851]: I0223 13:23:13.236004 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 23 13:23:13 crc kubenswrapper[4851]: I0223 13:23:13.236124 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5cmrb" Feb 23 13:23:13 crc kubenswrapper[4851]: I0223 13:23:13.236540 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 23 13:23:13 crc kubenswrapper[4851]: I0223 13:23:13.282037 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kcrgn"] Feb 23 13:23:13 crc kubenswrapper[4851]: I0223 13:23:13.362792 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rczqp\" (UniqueName: \"kubernetes.io/projected/60c05508-531e-4283-8b93-2619243c5f41-kube-api-access-rczqp\") pod \"openstack-operator-index-kcrgn\" (UID: \"60c05508-531e-4283-8b93-2619243c5f41\") " pod="openstack-operators/openstack-operator-index-kcrgn" Feb 23 13:23:13 crc kubenswrapper[4851]: I0223 13:23:13.463719 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczqp\" (UniqueName: \"kubernetes.io/projected/60c05508-531e-4283-8b93-2619243c5f41-kube-api-access-rczqp\") pod \"openstack-operator-index-kcrgn\" (UID: \"60c05508-531e-4283-8b93-2619243c5f41\") " pod="openstack-operators/openstack-operator-index-kcrgn" Feb 23 13:23:13 crc kubenswrapper[4851]: I0223 13:23:13.481070 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczqp\" (UniqueName: \"kubernetes.io/projected/60c05508-531e-4283-8b93-2619243c5f41-kube-api-access-rczqp\") pod \"openstack-operator-index-kcrgn\" (UID: 
\"60c05508-531e-4283-8b93-2619243c5f41\") " pod="openstack-operators/openstack-operator-index-kcrgn" Feb 23 13:23:13 crc kubenswrapper[4851]: I0223 13:23:13.562543 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kcrgn" Feb 23 13:23:13 crc kubenswrapper[4851]: I0223 13:23:13.983615 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kcrgn"] Feb 23 13:23:14 crc kubenswrapper[4851]: I0223 13:23:14.344646 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kcrgn" event={"ID":"60c05508-531e-4283-8b93-2619243c5f41","Type":"ContainerStarted","Data":"8e7b2e7a158696403852c73a7c2c695e3cd231d7df3a178f85a4ffe9b6f70231"} Feb 23 13:23:14 crc kubenswrapper[4851]: I0223 13:23:14.634268 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:23:14 crc kubenswrapper[4851]: I0223 13:23:14.674250 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:23:16 crc kubenswrapper[4851]: I0223 13:23:16.414787 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kcrgn"] Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.018891 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rfcf2"] Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.020068 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rfcf2" Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.027077 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rfcf2"] Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.110427 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78mh2\" (UniqueName: \"kubernetes.io/projected/edc46ca6-ff8f-4f31-981d-633b7a3766b1-kube-api-access-78mh2\") pod \"openstack-operator-index-rfcf2\" (UID: \"edc46ca6-ff8f-4f31-981d-633b7a3766b1\") " pod="openstack-operators/openstack-operator-index-rfcf2" Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.212069 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78mh2\" (UniqueName: \"kubernetes.io/projected/edc46ca6-ff8f-4f31-981d-633b7a3766b1-kube-api-access-78mh2\") pod \"openstack-operator-index-rfcf2\" (UID: \"edc46ca6-ff8f-4f31-981d-633b7a3766b1\") " pod="openstack-operators/openstack-operator-index-rfcf2" Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.230178 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78mh2\" (UniqueName: \"kubernetes.io/projected/edc46ca6-ff8f-4f31-981d-633b7a3766b1-kube-api-access-78mh2\") pod \"openstack-operator-index-rfcf2\" (UID: \"edc46ca6-ff8f-4f31-981d-633b7a3766b1\") " pod="openstack-operators/openstack-operator-index-rfcf2" Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.335202 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rfcf2" Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.370405 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kcrgn" event={"ID":"60c05508-531e-4283-8b93-2619243c5f41","Type":"ContainerStarted","Data":"56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686"} Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.370586 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-kcrgn" podUID="60c05508-531e-4283-8b93-2619243c5f41" containerName="registry-server" containerID="cri-o://56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686" gracePeriod=2 Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.395595 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kcrgn" podStartSLOduration=1.9262842629999999 podStartE2EDuration="4.395575561s" podCreationTimestamp="2026-02-23 13:23:13 +0000 UTC" firstStartedPulling="2026-02-23 13:23:13.988217541 +0000 UTC m=+948.669921219" lastFinishedPulling="2026-02-23 13:23:16.457508839 +0000 UTC m=+951.139212517" observedRunningTime="2026-02-23 13:23:17.391072693 +0000 UTC m=+952.072776391" watchObservedRunningTime="2026-02-23 13:23:17.395575561 +0000 UTC m=+952.077279249" Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.561830 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rfcf2"] Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.745437 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kcrgn" Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.820894 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rczqp\" (UniqueName: \"kubernetes.io/projected/60c05508-531e-4283-8b93-2619243c5f41-kube-api-access-rczqp\") pod \"60c05508-531e-4283-8b93-2619243c5f41\" (UID: \"60c05508-531e-4283-8b93-2619243c5f41\") " Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.826550 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c05508-531e-4283-8b93-2619243c5f41-kube-api-access-rczqp" (OuterVolumeSpecName: "kube-api-access-rczqp") pod "60c05508-531e-4283-8b93-2619243c5f41" (UID: "60c05508-531e-4283-8b93-2619243c5f41"). InnerVolumeSpecName "kube-api-access-rczqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:23:17 crc kubenswrapper[4851]: I0223 13:23:17.922196 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rczqp\" (UniqueName: \"kubernetes.io/projected/60c05508-531e-4283-8b93-2619243c5f41-kube-api-access-rczqp\") on node \"crc\" DevicePath \"\"" Feb 23 13:23:18 crc kubenswrapper[4851]: I0223 13:23:18.381261 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rfcf2" event={"ID":"edc46ca6-ff8f-4f31-981d-633b7a3766b1","Type":"ContainerStarted","Data":"d967ef5d28bc731b1e13fa7bdc7a37bfffed1cd80b8c09ad88ab5a99f49c6b74"} Feb 23 13:23:18 crc kubenswrapper[4851]: I0223 13:23:18.381442 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rfcf2" event={"ID":"edc46ca6-ff8f-4f31-981d-633b7a3766b1","Type":"ContainerStarted","Data":"44933d2b2246c5c53272ffbecd1d527b73f0e26e9b688fc5b3cc80690d2b0b4d"} Feb 23 13:23:18 crc kubenswrapper[4851]: I0223 13:23:18.384523 4851 generic.go:334] "Generic (PLEG): container finished" 
podID="60c05508-531e-4283-8b93-2619243c5f41" containerID="56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686" exitCode=0 Feb 23 13:23:18 crc kubenswrapper[4851]: I0223 13:23:18.384614 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kcrgn" Feb 23 13:23:18 crc kubenswrapper[4851]: I0223 13:23:18.384622 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kcrgn" event={"ID":"60c05508-531e-4283-8b93-2619243c5f41","Type":"ContainerDied","Data":"56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686"} Feb 23 13:23:18 crc kubenswrapper[4851]: I0223 13:23:18.385146 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kcrgn" event={"ID":"60c05508-531e-4283-8b93-2619243c5f41","Type":"ContainerDied","Data":"8e7b2e7a158696403852c73a7c2c695e3cd231d7df3a178f85a4ffe9b6f70231"} Feb 23 13:23:18 crc kubenswrapper[4851]: I0223 13:23:18.385194 4851 scope.go:117] "RemoveContainer" containerID="56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686" Feb 23 13:23:18 crc kubenswrapper[4851]: I0223 13:23:18.405620 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rfcf2" podStartSLOduration=1.34267599 podStartE2EDuration="1.405591292s" podCreationTimestamp="2026-02-23 13:23:17 +0000 UTC" firstStartedPulling="2026-02-23 13:23:17.574281283 +0000 UTC m=+952.255984961" lastFinishedPulling="2026-02-23 13:23:17.637196575 +0000 UTC m=+952.318900263" observedRunningTime="2026-02-23 13:23:18.402097443 +0000 UTC m=+953.083801161" watchObservedRunningTime="2026-02-23 13:23:18.405591292 +0000 UTC m=+953.087294990" Feb 23 13:23:18 crc kubenswrapper[4851]: I0223 13:23:18.422306 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kcrgn"] Feb 23 13:23:18 crc 
kubenswrapper[4851]: I0223 13:23:18.427810 4851 scope.go:117] "RemoveContainer" containerID="56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686" Feb 23 13:23:18 crc kubenswrapper[4851]: I0223 13:23:18.428378 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-kcrgn"] Feb 23 13:23:18 crc kubenswrapper[4851]: E0223 13:23:18.428520 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686\": container with ID starting with 56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686 not found: ID does not exist" containerID="56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686" Feb 23 13:23:18 crc kubenswrapper[4851]: I0223 13:23:18.428578 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686"} err="failed to get container status \"56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686\": rpc error: code = NotFound desc = could not find container \"56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686\": container with ID starting with 56adcfc12445666c2d1dfbfcf7c9b1557cab796ffe1633b21f7ac527f27c7686 not found: ID does not exist" Feb 23 13:23:19 crc kubenswrapper[4851]: I0223 13:23:19.055274 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-8qsz9" Feb 23 13:23:19 crc kubenswrapper[4851]: I0223 13:23:19.638625 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hb2x8" Feb 23 13:23:19 crc kubenswrapper[4851]: I0223 13:23:19.728084 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-88gf5" Feb 23 13:23:19 crc kubenswrapper[4851]: I0223 
13:23:19.975847 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c05508-531e-4283-8b93-2619243c5f41" path="/var/lib/kubelet/pods/60c05508-531e-4283-8b93-2619243c5f41/volumes" Feb 23 13:23:27 crc kubenswrapper[4851]: I0223 13:23:27.336373 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rfcf2" Feb 23 13:23:27 crc kubenswrapper[4851]: I0223 13:23:27.337796 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rfcf2" Feb 23 13:23:27 crc kubenswrapper[4851]: I0223 13:23:27.373876 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-rfcf2" Feb 23 13:23:27 crc kubenswrapper[4851]: I0223 13:23:27.464475 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-rfcf2" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.460427 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh"] Feb 23 13:23:28 crc kubenswrapper[4851]: E0223 13:23:28.460919 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c05508-531e-4283-8b93-2619243c5f41" containerName="registry-server" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.460934 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c05508-531e-4283-8b93-2619243c5f41" containerName="registry-server" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.461081 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c05508-531e-4283-8b93-2619243c5f41" containerName="registry-server" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.462051 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.464745 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vc2s8" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.468967 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh"] Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.567219 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw6j5\" (UniqueName: \"kubernetes.io/projected/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-kube-api-access-mw6j5\") pod \"8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.567322 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-bundle\") pod \"8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.567403 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-util\") pod \"8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 
13:23:28.668933 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-util\") pod \"8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.669006 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw6j5\" (UniqueName: \"kubernetes.io/projected/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-kube-api-access-mw6j5\") pod \"8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.669054 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-bundle\") pod \"8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.669528 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-util\") pod \"8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.669551 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-bundle\") pod \"8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.687425 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw6j5\" (UniqueName: \"kubernetes.io/projected/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-kube-api-access-mw6j5\") pod \"8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:28 crc kubenswrapper[4851]: I0223 13:23:28.780426 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:29 crc kubenswrapper[4851]: I0223 13:23:29.169299 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh"] Feb 23 13:23:29 crc kubenswrapper[4851]: W0223 13:23:29.172318 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76eb9096_ceb3_4f9e_8dea_2fce146af5c0.slice/crio-83aa05f6f60b7a43d7d0b316053749969b9f003af908ba4a9fcc2d3f2835f422 WatchSource:0}: Error finding container 83aa05f6f60b7a43d7d0b316053749969b9f003af908ba4a9fcc2d3f2835f422: Status 404 returned error can't find the container with id 83aa05f6f60b7a43d7d0b316053749969b9f003af908ba4a9fcc2d3f2835f422 Feb 23 13:23:29 crc kubenswrapper[4851]: I0223 13:23:29.462284 4851 generic.go:334] "Generic (PLEG): container finished" podID="76eb9096-ceb3-4f9e-8dea-2fce146af5c0" containerID="cbc7653858a8d509e6c1a918425aad5ae1415f331d8c8a5f6f12a5430f71d63c" exitCode=0 Feb 23 
13:23:29 crc kubenswrapper[4851]: I0223 13:23:29.462378 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" event={"ID":"76eb9096-ceb3-4f9e-8dea-2fce146af5c0","Type":"ContainerDied","Data":"cbc7653858a8d509e6c1a918425aad5ae1415f331d8c8a5f6f12a5430f71d63c"} Feb 23 13:23:29 crc kubenswrapper[4851]: I0223 13:23:29.462436 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" event={"ID":"76eb9096-ceb3-4f9e-8dea-2fce146af5c0","Type":"ContainerStarted","Data":"83aa05f6f60b7a43d7d0b316053749969b9f003af908ba4a9fcc2d3f2835f422"} Feb 23 13:23:30 crc kubenswrapper[4851]: I0223 13:23:30.470280 4851 generic.go:334] "Generic (PLEG): container finished" podID="76eb9096-ceb3-4f9e-8dea-2fce146af5c0" containerID="419e2ed5c19d14fc1f9581765aba36af9702c75144b3cedc69d25ada0cfe7005" exitCode=0 Feb 23 13:23:30 crc kubenswrapper[4851]: I0223 13:23:30.470371 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" event={"ID":"76eb9096-ceb3-4f9e-8dea-2fce146af5c0","Type":"ContainerDied","Data":"419e2ed5c19d14fc1f9581765aba36af9702c75144b3cedc69d25ada0cfe7005"} Feb 23 13:23:31 crc kubenswrapper[4851]: I0223 13:23:31.478283 4851 generic.go:334] "Generic (PLEG): container finished" podID="76eb9096-ceb3-4f9e-8dea-2fce146af5c0" containerID="3dc17d6d54ab63330761cd5ad80bcdcfc8b3d48dcf88057267e910200a9581d5" exitCode=0 Feb 23 13:23:31 crc kubenswrapper[4851]: I0223 13:23:31.478369 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" event={"ID":"76eb9096-ceb3-4f9e-8dea-2fce146af5c0","Type":"ContainerDied","Data":"3dc17d6d54ab63330761cd5ad80bcdcfc8b3d48dcf88057267e910200a9581d5"} Feb 23 13:23:32 crc kubenswrapper[4851]: I0223 13:23:32.744419 
4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:32 crc kubenswrapper[4851]: I0223 13:23:32.822980 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-util\") pod \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " Feb 23 13:23:32 crc kubenswrapper[4851]: I0223 13:23:32.823099 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-bundle\") pod \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " Feb 23 13:23:32 crc kubenswrapper[4851]: I0223 13:23:32.823145 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw6j5\" (UniqueName: \"kubernetes.io/projected/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-kube-api-access-mw6j5\") pod \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\" (UID: \"76eb9096-ceb3-4f9e-8dea-2fce146af5c0\") " Feb 23 13:23:32 crc kubenswrapper[4851]: I0223 13:23:32.823750 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-bundle" (OuterVolumeSpecName: "bundle") pod "76eb9096-ceb3-4f9e-8dea-2fce146af5c0" (UID: "76eb9096-ceb3-4f9e-8dea-2fce146af5c0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:23:32 crc kubenswrapper[4851]: I0223 13:23:32.828538 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-kube-api-access-mw6j5" (OuterVolumeSpecName: "kube-api-access-mw6j5") pod "76eb9096-ceb3-4f9e-8dea-2fce146af5c0" (UID: "76eb9096-ceb3-4f9e-8dea-2fce146af5c0"). 
InnerVolumeSpecName "kube-api-access-mw6j5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:23:32 crc kubenswrapper[4851]: I0223 13:23:32.838043 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-util" (OuterVolumeSpecName: "util") pod "76eb9096-ceb3-4f9e-8dea-2fce146af5c0" (UID: "76eb9096-ceb3-4f9e-8dea-2fce146af5c0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:23:32 crc kubenswrapper[4851]: I0223 13:23:32.925067 4851 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:23:32 crc kubenswrapper[4851]: I0223 13:23:32.925099 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw6j5\" (UniqueName: \"kubernetes.io/projected/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-kube-api-access-mw6j5\") on node \"crc\" DevicePath \"\"" Feb 23 13:23:32 crc kubenswrapper[4851]: I0223 13:23:32.925113 4851 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76eb9096-ceb3-4f9e-8dea-2fce146af5c0-util\") on node \"crc\" DevicePath \"\"" Feb 23 13:23:33 crc kubenswrapper[4851]: I0223 13:23:33.496921 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" event={"ID":"76eb9096-ceb3-4f9e-8dea-2fce146af5c0","Type":"ContainerDied","Data":"83aa05f6f60b7a43d7d0b316053749969b9f003af908ba4a9fcc2d3f2835f422"} Feb 23 13:23:33 crc kubenswrapper[4851]: I0223 13:23:33.496989 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83aa05f6f60b7a43d7d0b316053749969b9f003af908ba4a9fcc2d3f2835f422" Feb 23 13:23:33 crc kubenswrapper[4851]: I0223 13:23:33.497305 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh" Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.363193 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf"] Feb 23 13:23:35 crc kubenswrapper[4851]: E0223 13:23:35.363734 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76eb9096-ceb3-4f9e-8dea-2fce146af5c0" containerName="util" Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.363747 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="76eb9096-ceb3-4f9e-8dea-2fce146af5c0" containerName="util" Feb 23 13:23:35 crc kubenswrapper[4851]: E0223 13:23:35.363760 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76eb9096-ceb3-4f9e-8dea-2fce146af5c0" containerName="extract" Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.363766 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="76eb9096-ceb3-4f9e-8dea-2fce146af5c0" containerName="extract" Feb 23 13:23:35 crc kubenswrapper[4851]: E0223 13:23:35.363773 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76eb9096-ceb3-4f9e-8dea-2fce146af5c0" containerName="pull" Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.363779 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="76eb9096-ceb3-4f9e-8dea-2fce146af5c0" containerName="pull" Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.363887 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="76eb9096-ceb3-4f9e-8dea-2fce146af5c0" containerName="extract" Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.364241 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf" Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.366759 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-5km62" Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.392868 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf"] Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.456472 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwzx\" (UniqueName: \"kubernetes.io/projected/5723937f-de2b-455f-9015-e13595ee88e3-kube-api-access-gxwzx\") pod \"openstack-operator-controller-init-567cd64b9b-qlxlf\" (UID: \"5723937f-de2b-455f-9015-e13595ee88e3\") " pod="openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf" Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.558264 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxwzx\" (UniqueName: \"kubernetes.io/projected/5723937f-de2b-455f-9015-e13595ee88e3-kube-api-access-gxwzx\") pod \"openstack-operator-controller-init-567cd64b9b-qlxlf\" (UID: \"5723937f-de2b-455f-9015-e13595ee88e3\") " pod="openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf" Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.581518 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxwzx\" (UniqueName: \"kubernetes.io/projected/5723937f-de2b-455f-9015-e13595ee88e3-kube-api-access-gxwzx\") pod \"openstack-operator-controller-init-567cd64b9b-qlxlf\" (UID: \"5723937f-de2b-455f-9015-e13595ee88e3\") " pod="openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf" Feb 23 13:23:35 crc kubenswrapper[4851]: I0223 13:23:35.681174 4851 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf" Feb 23 13:23:36 crc kubenswrapper[4851]: I0223 13:23:36.092392 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf"] Feb 23 13:23:36 crc kubenswrapper[4851]: W0223 13:23:36.100663 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5723937f_de2b_455f_9015_e13595ee88e3.slice/crio-c4bb9790990b96c4e3c2e2f67d35bcb703564319b84a235e924971c20098f1d7 WatchSource:0}: Error finding container c4bb9790990b96c4e3c2e2f67d35bcb703564319b84a235e924971c20098f1d7: Status 404 returned error can't find the container with id c4bb9790990b96c4e3c2e2f67d35bcb703564319b84a235e924971c20098f1d7 Feb 23 13:23:36 crc kubenswrapper[4851]: I0223 13:23:36.515998 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf" event={"ID":"5723937f-de2b-455f-9015-e13595ee88e3","Type":"ContainerStarted","Data":"c4bb9790990b96c4e3c2e2f67d35bcb703564319b84a235e924971c20098f1d7"} Feb 23 13:23:40 crc kubenswrapper[4851]: I0223 13:23:40.536100 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf" event={"ID":"5723937f-de2b-455f-9015-e13595ee88e3","Type":"ContainerStarted","Data":"b75426607f177e8a34d0f5e1a2212fe235d5751afc2464580ec41d79f028fc9a"} Feb 23 13:23:40 crc kubenswrapper[4851]: I0223 13:23:40.536525 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf" Feb 23 13:23:40 crc kubenswrapper[4851]: I0223 13:23:40.562135 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf" 
podStartSLOduration=1.9017670899999999 podStartE2EDuration="5.562115135s" podCreationTimestamp="2026-02-23 13:23:35 +0000 UTC" firstStartedPulling="2026-02-23 13:23:36.102767836 +0000 UTC m=+970.784471514" lastFinishedPulling="2026-02-23 13:23:39.763115881 +0000 UTC m=+974.444819559" observedRunningTime="2026-02-23 13:23:40.558159713 +0000 UTC m=+975.239863401" watchObservedRunningTime="2026-02-23 13:23:40.562115135 +0000 UTC m=+975.243818813" Feb 23 13:23:45 crc kubenswrapper[4851]: I0223 13:23:45.685477 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-567cd64b9b-qlxlf" Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.463615 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qs84x"] Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.465941 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.477834 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qs84x"] Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.561814 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-catalog-content\") pod \"community-operators-qs84x\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.561878 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-utilities\") pod \"community-operators-qs84x\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " 
pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.562315 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46kc\" (UniqueName: \"kubernetes.io/projected/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-kube-api-access-f46kc\") pod \"community-operators-qs84x\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.663289 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f46kc\" (UniqueName: \"kubernetes.io/projected/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-kube-api-access-f46kc\") pod \"community-operators-qs84x\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.663428 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-catalog-content\") pod \"community-operators-qs84x\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.663470 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-utilities\") pod \"community-operators-qs84x\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.664146 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-utilities\") pod \"community-operators-qs84x\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " 
pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.664237 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-catalog-content\") pod \"community-operators-qs84x\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.711223 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f46kc\" (UniqueName: \"kubernetes.io/projected/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-kube-api-access-f46kc\") pod \"community-operators-qs84x\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:23:50 crc kubenswrapper[4851]: I0223 13:23:50.784033 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:23:51 crc kubenswrapper[4851]: I0223 13:23:51.197519 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qs84x"] Feb 23 13:23:51 crc kubenswrapper[4851]: I0223 13:23:51.618902 4851 generic.go:334] "Generic (PLEG): container finished" podID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" containerID="f34b4409a7d5117dda7b37dfce12c3d434e05d82646b6992e8ac3193f2e248a4" exitCode=0 Feb 23 13:23:51 crc kubenswrapper[4851]: I0223 13:23:51.618951 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs84x" event={"ID":"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd","Type":"ContainerDied","Data":"f34b4409a7d5117dda7b37dfce12c3d434e05d82646b6992e8ac3193f2e248a4"} Feb 23 13:23:51 crc kubenswrapper[4851]: I0223 13:23:51.618981 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs84x" 
event={"ID":"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd","Type":"ContainerStarted","Data":"93d5c1112bec3a579583b1747483ec3c17a8fecc39713679d1bfa83b40d18e12"} Feb 23 13:23:52 crc kubenswrapper[4851]: I0223 13:23:52.625691 4851 generic.go:334] "Generic (PLEG): container finished" podID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" containerID="0e6dd8f473d66b252e8e24a5f7c0359ea447ac4069f5ea7aef94e3426c5a531d" exitCode=0 Feb 23 13:23:52 crc kubenswrapper[4851]: I0223 13:23:52.625879 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs84x" event={"ID":"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd","Type":"ContainerDied","Data":"0e6dd8f473d66b252e8e24a5f7c0359ea447ac4069f5ea7aef94e3426c5a531d"} Feb 23 13:23:53 crc kubenswrapper[4851]: I0223 13:23:53.633386 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs84x" event={"ID":"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd","Type":"ContainerStarted","Data":"4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715"} Feb 23 13:23:53 crc kubenswrapper[4851]: I0223 13:23:53.651202 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qs84x" podStartSLOduration=2.145382702 podStartE2EDuration="3.65118388s" podCreationTimestamp="2026-02-23 13:23:50 +0000 UTC" firstStartedPulling="2026-02-23 13:23:51.620842205 +0000 UTC m=+986.302545883" lastFinishedPulling="2026-02-23 13:23:53.126643383 +0000 UTC m=+987.808347061" observedRunningTime="2026-02-23 13:23:53.649926724 +0000 UTC m=+988.331630422" watchObservedRunningTime="2026-02-23 13:23:53.65118388 +0000 UTC m=+988.332887558" Feb 23 13:24:00 crc kubenswrapper[4851]: I0223 13:24:00.784403 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:24:00 crc kubenswrapper[4851]: I0223 13:24:00.785198 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:24:00 crc kubenswrapper[4851]: I0223 13:24:00.825000 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:24:01 crc kubenswrapper[4851]: I0223 13:24:01.710200 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:24:01 crc kubenswrapper[4851]: I0223 13:24:01.751583 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qs84x"] Feb 23 13:24:03 crc kubenswrapper[4851]: I0223 13:24:03.684243 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qs84x" podUID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" containerName="registry-server" containerID="cri-o://4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715" gracePeriod=2 Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.059200 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qs84x" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.152107 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-catalog-content\") pod \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.152205 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-utilities\") pod \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.152245 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f46kc\" (UniqueName: \"kubernetes.io/projected/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-kube-api-access-f46kc\") pod \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\" (UID: \"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd\") " Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.153025 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-utilities" (OuterVolumeSpecName: "utilities") pod "cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" (UID: "cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.158142 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-kube-api-access-f46kc" (OuterVolumeSpecName: "kube-api-access-f46kc") pod "cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" (UID: "cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd"). InnerVolumeSpecName "kube-api-access-f46kc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.201605 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" (UID: "cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.254288 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.254345 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.254359 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f46kc\" (UniqueName: \"kubernetes.io/projected/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd-kube-api-access-f46kc\") on node \"crc\" DevicePath \"\"" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.486360 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r"] Feb 23 13:24:04 crc kubenswrapper[4851]: E0223 13:24:04.486666 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" containerName="extract-content" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.486681 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" containerName="extract-content" Feb 23 13:24:04 crc kubenswrapper[4851]: E0223 13:24:04.486701 4851 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" containerName="registry-server" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.486707 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" containerName="registry-server" Feb 23 13:24:04 crc kubenswrapper[4851]: E0223 13:24:04.486726 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" containerName="extract-utilities" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.486734 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" containerName="extract-utilities" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.486876 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" containerName="registry-server" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.487406 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.489024 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-rc96q" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.495553 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5"] Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.496387 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.502515 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-l4kx9" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.504927 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r"] Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.510625 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5"] Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.515910 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws"] Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.516961 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.518281 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-vvbd8" Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.533026 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x"] Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.534058 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.535447 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-qq7hk"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.551094 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.552093 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.555070 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-vfrrf"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.558123 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4v6f\" (UniqueName: \"kubernetes.io/projected/ca30fe6b-5b33-4e6e-acb5-93a49ae9257d-kube-api-access-g4v6f\") pod \"cinder-operator-controller-manager-55d77d7b5c-2xlr5\" (UID: \"ca30fe6b-5b33-4e6e-acb5-93a49ae9257d\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.558161 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxng7\" (UniqueName: \"kubernetes.io/projected/115cc313-eea6-40cd-9e8a-a7205e83cc07-kube-api-access-xxng7\") pod \"designate-operator-controller-manager-6d8bf5c495-k8pws\" (UID: \"115cc313-eea6-40cd-9e8a-a7205e83cc07\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.558187 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv4q9\" (UniqueName: \"kubernetes.io/projected/f17a63ea-4b87-429b-8c90-58790c572b9e-kube-api-access-zv4q9\") pod \"glance-operator-controller-manager-784b5bb6c5-rm79x\" (UID: \"f17a63ea-4b87-429b-8c90-58790c572b9e\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.558378 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4d6h\" (UniqueName: \"kubernetes.io/projected/c1c9227e-ff98-4005-ba5c-e2cfa2f9bb44-kube-api-access-j4d6h\") pod \"barbican-operator-controller-manager-868647ff47-nhj9r\" (UID: \"c1c9227e-ff98-4005-ba5c-e2cfa2f9bb44\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.565645 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.579006 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.582773 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.601319 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-l4rb9"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.623945 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.642538 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-b827v"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.653096 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.656489 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.659236 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-lprrf"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.659485 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.660842 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxng7\" (UniqueName: \"kubernetes.io/projected/115cc313-eea6-40cd-9e8a-a7205e83cc07-kube-api-access-xxng7\") pod \"designate-operator-controller-manager-6d8bf5c495-k8pws\" (UID: \"115cc313-eea6-40cd-9e8a-a7205e83cc07\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.660888 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv4q9\" (UniqueName: \"kubernetes.io/projected/f17a63ea-4b87-429b-8c90-58790c572b9e-kube-api-access-zv4q9\") pod \"glance-operator-controller-manager-784b5bb6c5-rm79x\" (UID: \"f17a63ea-4b87-429b-8c90-58790c572b9e\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.660940 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4d6h\" (UniqueName: \"kubernetes.io/projected/c1c9227e-ff98-4005-ba5c-e2cfa2f9bb44-kube-api-access-j4d6h\") pod \"barbican-operator-controller-manager-868647ff47-nhj9r\" (UID: \"c1c9227e-ff98-4005-ba5c-e2cfa2f9bb44\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.660976 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5sld\" (UniqueName: \"kubernetes.io/projected/9abe19ef-7cfa-43dd-983c-bcef5a540100-kube-api-access-m5sld\") pod \"horizon-operator-controller-manager-5b9b8895d5-8wdqc\" (UID: \"9abe19ef-7cfa-43dd-983c-bcef5a540100\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.661005 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6vzb\" (UniqueName: \"kubernetes.io/projected/6fb817cf-5b9d-4879-a997-cd3f1d99db3c-kube-api-access-r6vzb\") pod \"heat-operator-controller-manager-69f49c598c-tvt8g\" (UID: \"6fb817cf-5b9d-4879-a997-cd3f1d99db3c\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.661038 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4v6f\" (UniqueName: \"kubernetes.io/projected/ca30fe6b-5b33-4e6e-acb5-93a49ae9257d-kube-api-access-g4v6f\") pod \"cinder-operator-controller-manager-55d77d7b5c-2xlr5\" (UID: \"ca30fe6b-5b33-4e6e-acb5-93a49ae9257d\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.686714 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4d6h\" (UniqueName: \"kubernetes.io/projected/c1c9227e-ff98-4005-ba5c-e2cfa2f9bb44-kube-api-access-j4d6h\") pod \"barbican-operator-controller-manager-868647ff47-nhj9r\" (UID: \"c1c9227e-ff98-4005-ba5c-e2cfa2f9bb44\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.688029 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv4q9\" (UniqueName: \"kubernetes.io/projected/f17a63ea-4b87-429b-8c90-58790c572b9e-kube-api-access-zv4q9\") pod \"glance-operator-controller-manager-784b5bb6c5-rm79x\" (UID: \"f17a63ea-4b87-429b-8c90-58790c572b9e\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.690673 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-b827v"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.699359 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxng7\" (UniqueName: \"kubernetes.io/projected/115cc313-eea6-40cd-9e8a-a7205e83cc07-kube-api-access-xxng7\") pod \"designate-operator-controller-manager-6d8bf5c495-k8pws\" (UID: \"115cc313-eea6-40cd-9e8a-a7205e83cc07\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.707270 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.707913 4851 generic.go:334] "Generic (PLEG): container finished" podID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" containerID="4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715" exitCode=0
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.708032 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs84x" event={"ID":"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd","Type":"ContainerDied","Data":"4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715"}
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.708108 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qs84x" event={"ID":"cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd","Type":"ContainerDied","Data":"93d5c1112bec3a579583b1747483ec3c17a8fecc39713679d1bfa83b40d18e12"}
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.708176 4851 scope.go:117] "RemoveContainer" containerID="4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.708436 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qs84x"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.709406 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4v6f\" (UniqueName: \"kubernetes.io/projected/ca30fe6b-5b33-4e6e-acb5-93a49ae9257d-kube-api-access-g4v6f\") pod \"cinder-operator-controller-manager-55d77d7b5c-2xlr5\" (UID: \"ca30fe6b-5b33-4e6e-acb5-93a49ae9257d\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.716964 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.717758 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.725784 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-c2dxs"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.744462 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.746137 4851 scope.go:117] "RemoveContainer" containerID="0e6dd8f473d66b252e8e24a5f7c0359ea447ac4069f5ea7aef94e3426c5a531d"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.765419 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.766498 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.766808 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert\") pod \"infra-operator-controller-manager-79d975b745-b827v\" (UID: \"834a522f-ca03-403d-8402-679845f7c6c3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.766912 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5sld\" (UniqueName: \"kubernetes.io/projected/9abe19ef-7cfa-43dd-983c-bcef5a540100-kube-api-access-m5sld\") pod \"horizon-operator-controller-manager-5b9b8895d5-8wdqc\" (UID: \"9abe19ef-7cfa-43dd-983c-bcef5a540100\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.766951 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6vzb\" (UniqueName: \"kubernetes.io/projected/6fb817cf-5b9d-4879-a997-cd3f1d99db3c-kube-api-access-r6vzb\") pod \"heat-operator-controller-manager-69f49c598c-tvt8g\" (UID: \"6fb817cf-5b9d-4879-a997-cd3f1d99db3c\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.766996 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwm2\" (UniqueName: \"kubernetes.io/projected/834a522f-ca03-403d-8402-679845f7c6c3-kube-api-access-jhwm2\") pod \"infra-operator-controller-manager-79d975b745-b827v\" (UID: \"834a522f-ca03-403d-8402-679845f7c6c3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.767027 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxkcv\" (UniqueName: \"kubernetes.io/projected/ef730879-0a7d-4e4a-925e-8ef30c366d64-kube-api-access-wxkcv\") pod \"ironic-operator-controller-manager-554564d7fc-bctck\" (UID: \"ef730879-0a7d-4e4a-925e-8ef30c366d64\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.769698 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-nnm7s"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.770291 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.782255 4851 scope.go:117] "RemoveContainer" containerID="f34b4409a7d5117dda7b37dfce12c3d434e05d82646b6992e8ac3193f2e248a4"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.786640 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-sd26k"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.787492 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-sd26k"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.794446 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-vzf8b"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.800025 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6vzb\" (UniqueName: \"kubernetes.io/projected/6fb817cf-5b9d-4879-a997-cd3f1d99db3c-kube-api-access-r6vzb\") pod \"heat-operator-controller-manager-69f49c598c-tvt8g\" (UID: \"6fb817cf-5b9d-4879-a997-cd3f1d99db3c\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.804136 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.805027 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.806022 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5sld\" (UniqueName: \"kubernetes.io/projected/9abe19ef-7cfa-43dd-983c-bcef5a540100-kube-api-access-m5sld\") pod \"horizon-operator-controller-manager-5b9b8895d5-8wdqc\" (UID: \"9abe19ef-7cfa-43dd-983c-bcef5a540100\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.806578 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-svpmm"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.815718 4851 scope.go:117] "RemoveContainer" containerID="4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715"
Feb 23 13:24:04 crc kubenswrapper[4851]: E0223 13:24:04.816245 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715\": container with ID starting with 4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715 not found: ID does not exist" containerID="4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.816353 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715"} err="failed to get container status \"4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715\": rpc error: code = NotFound desc = could not find container \"4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715\": container with ID starting with 4b49e7fed22a6a13ea5cdcb3bb003b6eb60085b4d75b836d8cf4c98d341ee715 not found: ID does not exist"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.816455 4851 scope.go:117] "RemoveContainer" containerID="0e6dd8f473d66b252e8e24a5f7c0359ea447ac4069f5ea7aef94e3426c5a531d"
Feb 23 13:24:04 crc kubenswrapper[4851]: E0223 13:24:04.817681 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e6dd8f473d66b252e8e24a5f7c0359ea447ac4069f5ea7aef94e3426c5a531d\": container with ID starting with 0e6dd8f473d66b252e8e24a5f7c0359ea447ac4069f5ea7aef94e3426c5a531d not found: ID does not exist" containerID="0e6dd8f473d66b252e8e24a5f7c0359ea447ac4069f5ea7aef94e3426c5a531d"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.817783 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e6dd8f473d66b252e8e24a5f7c0359ea447ac4069f5ea7aef94e3426c5a531d"} err="failed to get container status \"0e6dd8f473d66b252e8e24a5f7c0359ea447ac4069f5ea7aef94e3426c5a531d\": rpc error: code = NotFound desc = could not find container \"0e6dd8f473d66b252e8e24a5f7c0359ea447ac4069f5ea7aef94e3426c5a531d\": container with ID starting with 0e6dd8f473d66b252e8e24a5f7c0359ea447ac4069f5ea7aef94e3426c5a531d not found: ID does not exist"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.817850 4851 scope.go:117] "RemoveContainer" containerID="f34b4409a7d5117dda7b37dfce12c3d434e05d82646b6992e8ac3193f2e248a4"
Feb 23 13:24:04 crc kubenswrapper[4851]: E0223 13:24:04.818952 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f34b4409a7d5117dda7b37dfce12c3d434e05d82646b6992e8ac3193f2e248a4\": container with ID starting with f34b4409a7d5117dda7b37dfce12c3d434e05d82646b6992e8ac3193f2e248a4 not found: ID does not exist" containerID="f34b4409a7d5117dda7b37dfce12c3d434e05d82646b6992e8ac3193f2e248a4"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.818987 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f34b4409a7d5117dda7b37dfce12c3d434e05d82646b6992e8ac3193f2e248a4"} err="failed to get container status \"f34b4409a7d5117dda7b37dfce12c3d434e05d82646b6992e8ac3193f2e248a4\": rpc error: code = NotFound desc = could not find container \"f34b4409a7d5117dda7b37dfce12c3d434e05d82646b6992e8ac3193f2e248a4\": container with ID starting with f34b4409a7d5117dda7b37dfce12c3d434e05d82646b6992e8ac3193f2e248a4 not found: ID does not exist"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.833976 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-sd26k"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.841323 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.845397 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.846027 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.846213 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.847594 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dznds"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.849982 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-65dws"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.851047 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-65dws"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.855125 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ng4vp"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.857646 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.859920 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.862541 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-khq2k"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.865198 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.881216 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.882071 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.884433 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhwm2\" (UniqueName: \"kubernetes.io/projected/834a522f-ca03-403d-8402-679845f7c6c3-kube-api-access-jhwm2\") pod \"infra-operator-controller-manager-79d975b745-b827v\" (UID: \"834a522f-ca03-403d-8402-679845f7c6c3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.884623 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxkcv\" (UniqueName: \"kubernetes.io/projected/ef730879-0a7d-4e4a-925e-8ef30c366d64-kube-api-access-wxkcv\") pod \"ironic-operator-controller-manager-554564d7fc-bctck\" (UID: \"ef730879-0a7d-4e4a-925e-8ef30c366d64\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.884812 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert\") pod \"infra-operator-controller-manager-79d975b745-b827v\" (UID: \"834a522f-ca03-403d-8402-679845f7c6c3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.885276 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dz5n\" (UniqueName: \"kubernetes.io/projected/0dbb5228-ae4a-427d-97a2-3768b460e134-kube-api-access-2dz5n\") pod \"manila-operator-controller-manager-67d996989d-sd26k\" (UID: \"0dbb5228-ae4a-427d-97a2-3768b460e134\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-sd26k"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.885460 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr97c\" (UniqueName: \"kubernetes.io/projected/40f2272b-7e63-4666-b858-9722a0af16c8-kube-api-access-kr97c\") pod \"mariadb-operator-controller-manager-6994f66f48-vrjqg\" (UID: \"40f2272b-7e63-4666-b858-9722a0af16c8\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.888089 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.888284 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-6r4ds"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.888821 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws"
Feb 23 13:24:04 crc kubenswrapper[4851]: E0223 13:24:04.889486 4851 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 23 13:24:04 crc kubenswrapper[4851]: E0223 13:24:04.889532 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert podName:834a522f-ca03-403d-8402-679845f7c6c3 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:05.389517112 +0000 UTC m=+1000.071220790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert") pod "infra-operator-controller-manager-79d975b745-b827v" (UID: "834a522f-ca03-403d-8402-679845f7c6c3") : secret "infra-operator-webhook-server-cert" not found
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.890999 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjvs\" (UniqueName: \"kubernetes.io/projected/71fd5f4f-a9fc-4242-813a-3fb7d5827c41-kube-api-access-gsjvs\") pod \"keystone-operator-controller-manager-b4d948c87-pd5bf\" (UID: \"71fd5f4f-a9fc-4242-813a-3fb7d5827c41\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.908605 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-65dws"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.909677 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.911166 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.913491 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhwm2\" (UniqueName: \"kubernetes.io/projected/834a522f-ca03-403d-8402-679845f7c6c3-kube-api-access-jhwm2\") pod \"infra-operator-controller-manager-79d975b745-b827v\" (UID: \"834a522f-ca03-403d-8402-679845f7c6c3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.913960 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxkcv\" (UniqueName: \"kubernetes.io/projected/ef730879-0a7d-4e4a-925e-8ef30c366d64-kube-api-access-wxkcv\") pod \"ironic-operator-controller-manager-554564d7fc-bctck\" (UID: \"ef730879-0a7d-4e4a-925e-8ef30c366d64\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.925083 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.926053 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.926269 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.927846 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-j84fr"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.933309 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.939593 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.944818 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.947538 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.948250 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.948374 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.952062 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nwdjv"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.958167 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.959157 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.961307 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-8xwmk"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.962629 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.966568 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qs84x"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.984108 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw"]
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.991707 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rdl7\" (UniqueName: \"kubernetes.io/projected/84a8d9f7-24b2-4f08-a917-b614dc537ffe-kube-api-access-6rdl7\") pod \"neutron-operator-controller-manager-6bd4687957-rbgkf\" (UID: \"84a8d9f7-24b2-4f08-a917-b614dc537ffe\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.991747 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dz5n\" (UniqueName: \"kubernetes.io/projected/0dbb5228-ae4a-427d-97a2-3768b460e134-kube-api-access-2dz5n\") pod \"manila-operator-controller-manager-67d996989d-sd26k\" (UID: \"0dbb5228-ae4a-427d-97a2-3768b460e134\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-sd26k"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.991777 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr97c\" (UniqueName: \"kubernetes.io/projected/40f2272b-7e63-4666-b858-9722a0af16c8-kube-api-access-kr97c\") pod \"mariadb-operator-controller-manager-6994f66f48-vrjqg\" (UID: \"40f2272b-7e63-4666-b858-9722a0af16c8\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.991799 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f68ms\" (UniqueName: \"kubernetes.io/projected/e289a048-8c1a-4349-8b3b-8f3628e23bdc-kube-api-access-f68ms\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm\" (UID: \"e289a048-8c1a-4349-8b3b-8f3628e23bdc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.991821 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjvs\" (UniqueName: \"kubernetes.io/projected/71fd5f4f-a9fc-4242-813a-3fb7d5827c41-kube-api-access-gsjvs\") pod \"keystone-operator-controller-manager-b4d948c87-pd5bf\" (UID: \"71fd5f4f-a9fc-4242-813a-3fb7d5827c41\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.991841 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km5ng\" (UniqueName: \"kubernetes.io/projected/fbc0edce-88b5-4ddc-8495-01e33e7a7753-kube-api-access-km5ng\") pod \"octavia-operator-controller-manager-659dc6bbfc-x2gtd\" (UID: \"fbc0edce-88b5-4ddc-8495-01e33e7a7753\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.991867 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm\" (UID: \"e289a048-8c1a-4349-8b3b-8f3628e23bdc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.991893 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4x9g\" (UniqueName: \"kubernetes.io/projected/33df439b-30ca-4397-a992-be2de607477a-kube-api-access-g4x9g\") pod \"nova-operator-controller-manager-567668f5cf-65dws\" (UID: \"33df439b-30ca-4397-a992-be2de607477a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-65dws"
Feb 23 13:24:04 crc kubenswrapper[4851]: I0223 13:24:04.991918 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gsj6\" (UniqueName: \"kubernetes.io/projected/ddcb4697-a6af-4baa-bd78-ae1f3b47c6af-kube-api-access-5gsj6\") pod \"ovn-operator-controller-manager-5955d8c787-x9kh8\" (UID: \"ddcb4697-a6af-4baa-bd78-ae1f3b47c6af\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8"
Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.010367 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qs84x"]
Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.043562 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjvs\" (UniqueName: \"kubernetes.io/projected/71fd5f4f-a9fc-4242-813a-3fb7d5827c41-kube-api-access-gsjvs\") pod \"keystone-operator-controller-manager-b4d948c87-pd5bf\" (UID: \"71fd5f4f-a9fc-4242-813a-3fb7d5827c41\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf"
Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.047703 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dz5n\" (UniqueName: \"kubernetes.io/projected/0dbb5228-ae4a-427d-97a2-3768b460e134-kube-api-access-2dz5n\") pod \"manila-operator-controller-manager-67d996989d-sd26k\" (UID: \"0dbb5228-ae4a-427d-97a2-3768b460e134\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-sd26k"
Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.048588 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr97c\" (UniqueName: \"kubernetes.io/projected/40f2272b-7e63-4666-b858-9722a0af16c8-kube-api-access-kr97c\") pod \"mariadb-operator-controller-manager-6994f66f48-vrjqg\" (UID: \"40f2272b-7e63-4666-b858-9722a0af16c8\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg"
Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.098466 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm\" (UID: \"e289a048-8c1a-4349-8b3b-8f3628e23bdc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm"
Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.098530 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4x9g\" (UniqueName: 
\"kubernetes.io/projected/33df439b-30ca-4397-a992-be2de607477a-kube-api-access-g4x9g\") pod \"nova-operator-controller-manager-567668f5cf-65dws\" (UID: \"33df439b-30ca-4397-a992-be2de607477a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-65dws" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.098581 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gsj6\" (UniqueName: \"kubernetes.io/projected/ddcb4697-a6af-4baa-bd78-ae1f3b47c6af-kube-api-access-5gsj6\") pod \"ovn-operator-controller-manager-5955d8c787-x9kh8\" (UID: \"ddcb4697-a6af-4baa-bd78-ae1f3b47c6af\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.098609 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfv6f\" (UniqueName: \"kubernetes.io/projected/f9f540e9-5c10-4e33-b283-328276817914-kube-api-access-zfv6f\") pod \"placement-operator-controller-manager-8497b45c89-hktpx\" (UID: \"f9f540e9-5c10-4e33-b283-328276817914\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" Feb 23 13:24:05 crc kubenswrapper[4851]: E0223 13:24:05.098726 4851 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:24:05 crc kubenswrapper[4851]: E0223 13:24:05.098807 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert podName:e289a048-8c1a-4349-8b3b-8f3628e23bdc nodeName:}" failed. No retries permitted until 2026-02-23 13:24:05.598784985 +0000 UTC m=+1000.280488663 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" (UID: "e289a048-8c1a-4349-8b3b-8f3628e23bdc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.105494 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.109184 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rdl7\" (UniqueName: \"kubernetes.io/projected/84a8d9f7-24b2-4f08-a917-b614dc537ffe-kube-api-access-6rdl7\") pod \"neutron-operator-controller-manager-6bd4687957-rbgkf\" (UID: \"84a8d9f7-24b2-4f08-a917-b614dc537ffe\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.109933 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f68ms\" (UniqueName: \"kubernetes.io/projected/e289a048-8c1a-4349-8b3b-8f3628e23bdc-kube-api-access-f68ms\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm\" (UID: \"e289a048-8c1a-4349-8b3b-8f3628e23bdc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.110153 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnsbz\" (UniqueName: \"kubernetes.io/projected/946a66f3-be29-4e8b-a800-637ef24a5694-kube-api-access-rnsbz\") pod \"swift-operator-controller-manager-68f46476f-bwbpw\" (UID: \"946a66f3-be29-4e8b-a800-637ef24a5694\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 
13:24:05.110460 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km5ng\" (UniqueName: \"kubernetes.io/projected/fbc0edce-88b5-4ddc-8495-01e33e7a7753-kube-api-access-km5ng\") pod \"octavia-operator-controller-manager-659dc6bbfc-x2gtd\" (UID: \"fbc0edce-88b5-4ddc-8495-01e33e7a7753\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.113663 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.122259 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gsj6\" (UniqueName: \"kubernetes.io/projected/ddcb4697-a6af-4baa-bd78-ae1f3b47c6af-kube-api-access-5gsj6\") pod \"ovn-operator-controller-manager-5955d8c787-x9kh8\" (UID: \"ddcb4697-a6af-4baa-bd78-ae1f3b47c6af\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.128815 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4x9g\" (UniqueName: \"kubernetes.io/projected/33df439b-30ca-4397-a992-be2de607477a-kube-api-access-g4x9g\") pod \"nova-operator-controller-manager-567668f5cf-65dws\" (UID: \"33df439b-30ca-4397-a992-be2de607477a\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-65dws" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.136427 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.137536 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.137656 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rdl7\" (UniqueName: \"kubernetes.io/projected/84a8d9f7-24b2-4f08-a917-b614dc537ffe-kube-api-access-6rdl7\") pod \"neutron-operator-controller-manager-6bd4687957-rbgkf\" (UID: \"84a8d9f7-24b2-4f08-a917-b614dc537ffe\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.141061 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ps9rb" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.141123 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f68ms\" (UniqueName: \"kubernetes.io/projected/e289a048-8c1a-4349-8b3b-8f3628e23bdc-kube-api-access-f68ms\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm\" (UID: \"e289a048-8c1a-4349-8b3b-8f3628e23bdc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.141149 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km5ng\" (UniqueName: \"kubernetes.io/projected/fbc0edce-88b5-4ddc-8495-01e33e7a7753-kube-api-access-km5ng\") pod \"octavia-operator-controller-manager-659dc6bbfc-x2gtd\" (UID: \"fbc0edce-88b5-4ddc-8495-01e33e7a7753\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.157911 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-sd26k" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.180395 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.216013 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrq7h\" (UniqueName: \"kubernetes.io/projected/2a0bac92-ab56-4f67-a3a8-09ea4de25ae5-kube-api-access-mrq7h\") pod \"telemetry-operator-controller-manager-589c568786-khm2l\" (UID: \"2a0bac92-ab56-4f67-a3a8-09ea4de25ae5\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.216055 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnsbz\" (UniqueName: \"kubernetes.io/projected/946a66f3-be29-4e8b-a800-637ef24a5694-kube-api-access-rnsbz\") pod \"swift-operator-controller-manager-68f46476f-bwbpw\" (UID: \"946a66f3-be29-4e8b-a800-637ef24a5694\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.216137 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfv6f\" (UniqueName: \"kubernetes.io/projected/f9f540e9-5c10-4e33-b283-328276817914-kube-api-access-zfv6f\") pod \"placement-operator-controller-manager-8497b45c89-hktpx\" (UID: \"f9f540e9-5c10-4e33-b283-328276817914\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.217271 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.241787 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfv6f\" (UniqueName: \"kubernetes.io/projected/f9f540e9-5c10-4e33-b283-328276817914-kube-api-access-zfv6f\") pod \"placement-operator-controller-manager-8497b45c89-hktpx\" (UID: \"f9f540e9-5c10-4e33-b283-328276817914\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.243754 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnsbz\" (UniqueName: \"kubernetes.io/projected/946a66f3-be29-4e8b-a800-637ef24a5694-kube-api-access-rnsbz\") pod \"swift-operator-controller-manager-68f46476f-bwbpw\" (UID: \"946a66f3-be29-4e8b-a800-637ef24a5694\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.272724 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.273772 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.284168 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-lmbsg" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.284657 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.291605 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.295737 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.296540 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.303936 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mdrvm" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.309750 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.316569 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.317403 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrq7h\" (UniqueName: \"kubernetes.io/projected/2a0bac92-ab56-4f67-a3a8-09ea4de25ae5-kube-api-access-mrq7h\") pod \"telemetry-operator-controller-manager-589c568786-khm2l\" (UID: \"2a0bac92-ab56-4f67-a3a8-09ea4de25ae5\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.317643 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.320802 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.320955 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.321639 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-42hjd" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.324851 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.327211 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-65dws" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.331853 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.337881 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.338006 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrq7h\" (UniqueName: \"kubernetes.io/projected/2a0bac92-ab56-4f67-a3a8-09ea4de25ae5-kube-api-access-mrq7h\") pod \"telemetry-operator-controller-manager-589c568786-khm2l\" (UID: \"2a0bac92-ab56-4f67-a3a8-09ea4de25ae5\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.340898 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-r8sw4" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.341848 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.354961 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.387661 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.397864 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.419771 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert\") pod \"infra-operator-controller-manager-79d975b745-b827v\" (UID: \"834a522f-ca03-403d-8402-679845f7c6c3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.419830 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxhx9\" (UniqueName: \"kubernetes.io/projected/18ea2332-4904-4213-9ba2-c678a2125b37-kube-api-access-vxhx9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sv2f9\" (UID: \"18ea2332-4904-4213-9ba2-c678a2125b37\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.419884 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhnwp\" (UniqueName: \"kubernetes.io/projected/6769f01c-bcc7-4e3e-a791-0fa315f82b37-kube-api-access-bhnwp\") pod \"test-operator-controller-manager-5dc6794d5b-zktk2\" (UID: \"6769f01c-bcc7-4e3e-a791-0fa315f82b37\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.419925 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs\") pod 
\"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.419952 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw7jk\" (UniqueName: \"kubernetes.io/projected/7a7fd548-a78f-4096-b68a-2bc28b937e96-kube-api-access-xw7jk\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.419974 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.419989 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq5xz\" (UniqueName: \"kubernetes.io/projected/ea9b35cb-5758-42d8-8877-ceb1e19eb751-kube-api-access-wq5xz\") pod \"watcher-operator-controller-manager-bccc79885-gdn69\" (UID: \"ea9b35cb-5758-42d8-8877-ceb1e19eb751\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69" Feb 23 13:24:05 crc kubenswrapper[4851]: E0223 13:24:05.420149 4851 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 13:24:05 crc kubenswrapper[4851]: E0223 13:24:05.420194 4851 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert podName:834a522f-ca03-403d-8402-679845f7c6c3 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:06.420180422 +0000 UTC m=+1001.101884100 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert") pod "infra-operator-controller-manager-79d975b745-b827v" (UID: "834a522f-ca03-403d-8402-679845f7c6c3") : secret "infra-operator-webhook-server-cert" not found Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.430429 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.448955 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.467530 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.522610 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.522659 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw7jk\" (UniqueName: \"kubernetes.io/projected/7a7fd548-a78f-4096-b68a-2bc28b937e96-kube-api-access-xw7jk\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.522680 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq5xz\" (UniqueName: \"kubernetes.io/projected/ea9b35cb-5758-42d8-8877-ceb1e19eb751-kube-api-access-wq5xz\") pod \"watcher-operator-controller-manager-bccc79885-gdn69\" (UID: \"ea9b35cb-5758-42d8-8877-ceb1e19eb751\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.522696 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.522748 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxhx9\" (UniqueName: \"kubernetes.io/projected/18ea2332-4904-4213-9ba2-c678a2125b37-kube-api-access-vxhx9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sv2f9\" (UID: \"18ea2332-4904-4213-9ba2-c678a2125b37\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.522795 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhnwp\" (UniqueName: \"kubernetes.io/projected/6769f01c-bcc7-4e3e-a791-0fa315f82b37-kube-api-access-bhnwp\") pod \"test-operator-controller-manager-5dc6794d5b-zktk2\" (UID: \"6769f01c-bcc7-4e3e-a791-0fa315f82b37\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" Feb 23 13:24:05 crc kubenswrapper[4851]: E0223 13:24:05.522967 4851 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:24:05 crc kubenswrapper[4851]: E0223 13:24:05.522994 4851 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 13:24:05 crc kubenswrapper[4851]: E0223 13:24:05.523050 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:06.023028363 +0000 UTC m=+1000.704732101 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "metrics-server-cert" not found Feb 23 13:24:05 crc kubenswrapper[4851]: E0223 13:24:05.523117 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:06.023074734 +0000 UTC m=+1000.704778412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "webhook-server-cert" not found Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.536034 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.550083 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxhx9\" (UniqueName: \"kubernetes.io/projected/18ea2332-4904-4213-9ba2-c678a2125b37-kube-api-access-vxhx9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-sv2f9\" (UID: \"18ea2332-4904-4213-9ba2-c678a2125b37\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.552087 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.553380 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw7jk\" (UniqueName: 
\"kubernetes.io/projected/7a7fd548-a78f-4096-b68a-2bc28b937e96-kube-api-access-xw7jk\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.556571 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhnwp\" (UniqueName: \"kubernetes.io/projected/6769f01c-bcc7-4e3e-a791-0fa315f82b37-kube-api-access-bhnwp\") pod \"test-operator-controller-manager-5dc6794d5b-zktk2\" (UID: \"6769f01c-bcc7-4e3e-a791-0fa315f82b37\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.561721 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq5xz\" (UniqueName: \"kubernetes.io/projected/ea9b35cb-5758-42d8-8877-ceb1e19eb751-kube-api-access-wq5xz\") pod \"watcher-operator-controller-manager-bccc79885-gdn69\" (UID: \"ea9b35cb-5758-42d8-8877-ceb1e19eb751\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69" Feb 23 13:24:05 crc kubenswrapper[4851]: W0223 13:24:05.589042 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod115cc313_eea6_40cd_9e8a_a7205e83cc07.slice/crio-820a3bfaf5f010a5048f1e13fe45fffe995e2b4a2fc964ad0182987fc2c595c1 WatchSource:0}: Error finding container 820a3bfaf5f010a5048f1e13fe45fffe995e2b4a2fc964ad0182987fc2c595c1: Status 404 returned error can't find the container with id 820a3bfaf5f010a5048f1e13fe45fffe995e2b4a2fc964ad0182987fc2c595c1 Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.610734 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.624384 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm\" (UID: \"e289a048-8c1a-4349-8b3b-8f3628e23bdc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" Feb 23 13:24:05 crc kubenswrapper[4851]: E0223 13:24:05.624586 4851 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:24:05 crc kubenswrapper[4851]: E0223 13:24:05.624679 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert podName:e289a048-8c1a-4349-8b3b-8f3628e23bdc nodeName:}" failed. No retries permitted until 2026-02-23 13:24:06.624654559 +0000 UTC m=+1001.306358237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" (UID: "e289a048-8c1a-4349-8b3b-8f3628e23bdc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.625554 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.684766 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9" Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.717856 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws" event={"ID":"115cc313-eea6-40cd-9e8a-a7205e83cc07","Type":"ContainerStarted","Data":"820a3bfaf5f010a5048f1e13fe45fffe995e2b4a2fc964ad0182987fc2c595c1"} Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.731078 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5" event={"ID":"ca30fe6b-5b33-4e6e-acb5-93a49ae9257d","Type":"ContainerStarted","Data":"2c0fc960b8c1fb862349e84ff32e115a1753ead116cb478d07ed0b94044e06c0"} Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.738608 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r" event={"ID":"c1c9227e-ff98-4005-ba5c-e2cfa2f9bb44","Type":"ContainerStarted","Data":"54697ea3be04e6b24c1d09c98428864029e39886856e8ac580463ba208e05411"} Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.870079 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.885395 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc"] Feb 23 13:24:05 crc kubenswrapper[4851]: I0223 13:24:05.904724 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x"] Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.032748 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd" path="/var/lib/kubelet/pods/cd9e9bb6-6fd5-4d11-912e-2a8e09a0eacd/volumes" Feb 
23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.034504 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.034602 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.034802 4851 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.034887 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:07.034851389 +0000 UTC m=+1001.716555057 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "webhook-server-cert" not found Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.034937 4851 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.034978 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:07.034971483 +0000 UTC m=+1001.716675161 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "metrics-server-cert" not found Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.035267 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck"] Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.035300 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g"] Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.035311 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg"] Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.035320 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-sd26k"] Feb 23 13:24:06 crc kubenswrapper[4851]: W0223 13:24:06.035623 
4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fb817cf_5b9d_4879_a997_cd3f1d99db3c.slice/crio-b5805abd52e049db1ca04a709b1fac2a6900a4fb22f86077b50ec28fd53c8819 WatchSource:0}: Error finding container b5805abd52e049db1ca04a709b1fac2a6900a4fb22f86077b50ec28fd53c8819: Status 404 returned error can't find the container with id b5805abd52e049db1ca04a709b1fac2a6900a4fb22f86077b50ec28fd53c8819 Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.180275 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd"] Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.203576 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf"] Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.225840 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-65dws"] Feb 23 13:24:06 crc kubenswrapper[4851]: W0223 13:24:06.243118 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33df439b_30ca_4397_a992_be2de607477a.slice/crio-34263890cd7a9eac951b2fc1d0bb313dc3e2caba222589d46bc6f0bebf4f725d WatchSource:0}: Error finding container 34263890cd7a9eac951b2fc1d0bb313dc3e2caba222589d46bc6f0bebf4f725d: Status 404 returned error can't find the container with id 34263890cd7a9eac951b2fc1d0bb313dc3e2caba222589d46bc6f0bebf4f725d Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.307250 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l"] Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.314745 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8"] 
Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.323495 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5gsj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5955d8c787-x9kh8_openstack-operators(ddcb4697-a6af-4baa-bd78-ae1f3b47c6af): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.324829 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8" podUID="ddcb4697-a6af-4baa-bd78-ae1f3b47c6af" Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.347475 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw"] Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.362313 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx"] Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.362367 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zfv6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-hktpx_openstack-operators(f9f540e9-5c10-4e33-b283-328276817914): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.362441 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnsbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-bwbpw_openstack-operators(946a66f3-be29-4e8b-a800-637ef24a5694): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.363605 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" podUID="946a66f3-be29-4e8b-a800-637ef24a5694" Feb 23 13:24:06 crc 
kubenswrapper[4851]: E0223 13:24:06.363662 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" podUID="f9f540e9-5c10-4e33-b283-328276817914" Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.398548 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9"] Feb 23 13:24:06 crc kubenswrapper[4851]: W0223 13:24:06.406829 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18ea2332_4904_4213_9ba2_c678a2125b37.slice/crio-c3976656781d3b06d269559a6edf9489b23e177f68893aa5a35021c93ae081d8 WatchSource:0}: Error finding container c3976656781d3b06d269559a6edf9489b23e177f68893aa5a35021c93ae081d8: Status 404 returned error can't find the container with id c3976656781d3b06d269559a6edf9489b23e177f68893aa5a35021c93ae081d8 Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.409915 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2"] Feb 23 13:24:06 crc kubenswrapper[4851]: W0223 13:24:06.411151 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6769f01c_bcc7_4e3e_a791_0fa315f82b37.slice/crio-003a99bb44c81940f3b88eb5ba1adda7d866ca11bcfbe8ab346965a08c2ce658 WatchSource:0}: Error finding container 003a99bb44c81940f3b88eb5ba1adda7d866ca11bcfbe8ab346965a08c2ce658: Status 404 returned error can't find the container with id 003a99bb44c81940f3b88eb5ba1adda7d866ca11bcfbe8ab346965a08c2ce658 Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.412307 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxhx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-sv2f9_openstack-operators(18ea2332-4904-4213-9ba2-c678a2125b37): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.415444 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9" podUID="18ea2332-4904-4213-9ba2-c678a2125b37" Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.416218 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bhnwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5dc6794d5b-zktk2_openstack-operators(6769f01c-bcc7-4e3e-a791-0fa315f82b37): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.417505 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" podUID="6769f01c-bcc7-4e3e-a791-0fa315f82b37" Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.443843 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert\") pod \"infra-operator-controller-manager-79d975b745-b827v\" (UID: \"834a522f-ca03-403d-8402-679845f7c6c3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.444057 4851 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 
13:24:06.444157 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert podName:834a522f-ca03-403d-8402-679845f7c6c3 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:08.444131743 +0000 UTC m=+1003.125835421 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert") pod "infra-operator-controller-manager-79d975b745-b827v" (UID: "834a522f-ca03-403d-8402-679845f7c6c3") : secret "infra-operator-webhook-server-cert" not found Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.477032 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69"] Feb 23 13:24:06 crc kubenswrapper[4851]: W0223 13:24:06.486529 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea9b35cb_5758_42d8_8877_ceb1e19eb751.slice/crio-83fc80d26f13ca23f2e81c5b89075fb92eac11945b884a24faa70a7193f2b026 WatchSource:0}: Error finding container 83fc80d26f13ca23f2e81c5b89075fb92eac11945b884a24faa70a7193f2b026: Status 404 returned error can't find the container with id 83fc80d26f13ca23f2e81c5b89075fb92eac11945b884a24faa70a7193f2b026 Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.647136 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm\" (UID: \"e289a048-8c1a-4349-8b3b-8f3628e23bdc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.647423 4851 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.647573 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert podName:e289a048-8c1a-4349-8b3b-8f3628e23bdc nodeName:}" failed. No retries permitted until 2026-02-23 13:24:08.647538501 +0000 UTC m=+1003.329242179 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" (UID: "e289a048-8c1a-4349-8b3b-8f3628e23bdc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.753364 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf" event={"ID":"84a8d9f7-24b2-4f08-a917-b614dc537ffe","Type":"ContainerStarted","Data":"735017567a2494572d2b431a1bfb057c3dc7bf71e4c085a19ea230c5b6f82f08"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.756695 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9" event={"ID":"18ea2332-4904-4213-9ba2-c678a2125b37","Type":"ContainerStarted","Data":"c3976656781d3b06d269559a6edf9489b23e177f68893aa5a35021c93ae081d8"} Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.758076 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9" podUID="18ea2332-4904-4213-9ba2-c678a2125b37" Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.760713 4851 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" event={"ID":"6769f01c-bcc7-4e3e-a791-0fa315f82b37","Type":"ContainerStarted","Data":"003a99bb44c81940f3b88eb5ba1adda7d866ca11bcfbe8ab346965a08c2ce658"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.768357 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x" event={"ID":"f17a63ea-4b87-429b-8c90-58790c572b9e","Type":"ContainerStarted","Data":"04fe87aa4948376474484f92d7023ac04db7009fb31af04d106e809cad88fff4"} Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.768543 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" podUID="6769f01c-bcc7-4e3e-a791-0fa315f82b37" Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.770476 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck" event={"ID":"ef730879-0a7d-4e4a-925e-8ef30c366d64","Type":"ContainerStarted","Data":"0bc47dbe3379b34f9b71ce5163b9be5bbbdfc751389059debed6bcec62bc83a2"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.776210 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd" event={"ID":"fbc0edce-88b5-4ddc-8495-01e33e7a7753","Type":"ContainerStarted","Data":"0b6d0122bf1ae06b0cfe19aee6c6e46469809e70c7000b5f82d674fef0e39584"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.778743 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l" 
event={"ID":"2a0bac92-ab56-4f67-a3a8-09ea4de25ae5","Type":"ContainerStarted","Data":"6cc7cb239f305ba26cd4e44123a24089f2b75ce4e09f529ef2ae42f9d8a233f9"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.783794 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69" event={"ID":"ea9b35cb-5758-42d8-8877-ceb1e19eb751","Type":"ContainerStarted","Data":"83fc80d26f13ca23f2e81c5b89075fb92eac11945b884a24faa70a7193f2b026"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.787418 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg" event={"ID":"40f2272b-7e63-4666-b858-9722a0af16c8","Type":"ContainerStarted","Data":"1ea2825efe90205807814abf8408eb3b48a929fcec2b5c214999c6721d7e32e4"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.789180 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf" event={"ID":"71fd5f4f-a9fc-4242-813a-3fb7d5827c41","Type":"ContainerStarted","Data":"7878bf615ef1c720d478c00625b991c938bfb65d18bda1489f226dcc1397c3d3"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.793199 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" event={"ID":"f9f540e9-5c10-4e33-b283-328276817914","Type":"ContainerStarted","Data":"6275c6d2b00027b5fc42d345b46d86c54ddfa030b3e07b780c3de8b0e84dd1a0"} Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.795712 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" 
podUID="f9f540e9-5c10-4e33-b283-328276817914" Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.799317 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g" event={"ID":"6fb817cf-5b9d-4879-a997-cd3f1d99db3c","Type":"ContainerStarted","Data":"b5805abd52e049db1ca04a709b1fac2a6900a4fb22f86077b50ec28fd53c8819"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.801457 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-sd26k" event={"ID":"0dbb5228-ae4a-427d-97a2-3768b460e134","Type":"ContainerStarted","Data":"c41b831a014be1ace0650b5c8da15c5774e6f3b2c074cfadfba8930c8ceb78ae"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.802994 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8" event={"ID":"ddcb4697-a6af-4baa-bd78-ae1f3b47c6af","Type":"ContainerStarted","Data":"8d34b17dded1baa866ea52370f5c6da54aef8be616e90dbd2e514d144bbd98b2"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.805450 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc" event={"ID":"9abe19ef-7cfa-43dd-983c-bcef5a540100","Type":"ContainerStarted","Data":"f057487bc24984322cfc2100366f4a3cfd8db22fa79c3fa05b7733f553eac85e"} Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.805640 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8" podUID="ddcb4697-a6af-4baa-bd78-ae1f3b47c6af" Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.820099 4851 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-65dws" event={"ID":"33df439b-30ca-4397-a992-be2de607477a","Type":"ContainerStarted","Data":"34263890cd7a9eac951b2fc1d0bb313dc3e2caba222589d46bc6f0bebf4f725d"} Feb 23 13:24:06 crc kubenswrapper[4851]: I0223 13:24:06.822797 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" event={"ID":"946a66f3-be29-4e8b-a800-637ef24a5694","Type":"ContainerStarted","Data":"d341efe355e4196683f981c1d49f14864300d6478af7c498732e416ca583593a"} Feb 23 13:24:06 crc kubenswrapper[4851]: E0223 13:24:06.824731 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" podUID="946a66f3-be29-4e8b-a800-637ef24a5694" Feb 23 13:24:07 crc kubenswrapper[4851]: I0223 13:24:07.061199 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:07 crc kubenswrapper[4851]: I0223 13:24:07.061263 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:07 crc kubenswrapper[4851]: 
E0223 13:24:07.062274 4851 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:24:07 crc kubenswrapper[4851]: E0223 13:24:07.063046 4851 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 13:24:07 crc kubenswrapper[4851]: E0223 13:24:07.063104 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:09.062338681 +0000 UTC m=+1003.744042349 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "metrics-server-cert" not found Feb 23 13:24:07 crc kubenswrapper[4851]: E0223 13:24:07.063166 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:09.063121933 +0000 UTC m=+1003.744825661 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "webhook-server-cert" not found Feb 23 13:24:07 crc kubenswrapper[4851]: E0223 13:24:07.839581 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" podUID="6769f01c-bcc7-4e3e-a791-0fa315f82b37" Feb 23 13:24:07 crc kubenswrapper[4851]: E0223 13:24:07.839904 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" podUID="f9f540e9-5c10-4e33-b283-328276817914" Feb 23 13:24:07 crc kubenswrapper[4851]: E0223 13:24:07.839956 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8" podUID="ddcb4697-a6af-4baa-bd78-ae1f3b47c6af" Feb 23 13:24:07 crc kubenswrapper[4851]: E0223 13:24:07.844033 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" podUID="946a66f3-be29-4e8b-a800-637ef24a5694" Feb 23 13:24:07 crc kubenswrapper[4851]: E0223 13:24:07.844089 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9" podUID="18ea2332-4904-4213-9ba2-c678a2125b37" Feb 23 13:24:08 crc kubenswrapper[4851]: I0223 13:24:08.480806 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert\") pod \"infra-operator-controller-manager-79d975b745-b827v\" (UID: \"834a522f-ca03-403d-8402-679845f7c6c3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" Feb 23 13:24:08 crc kubenswrapper[4851]: E0223 13:24:08.481003 4851 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 13:24:08 crc kubenswrapper[4851]: E0223 13:24:08.481093 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert podName:834a522f-ca03-403d-8402-679845f7c6c3 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:12.481071956 +0000 UTC m=+1007.162775634 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert") pod "infra-operator-controller-manager-79d975b745-b827v" (UID: "834a522f-ca03-403d-8402-679845f7c6c3") : secret "infra-operator-webhook-server-cert" not found Feb 23 13:24:08 crc kubenswrapper[4851]: I0223 13:24:08.683996 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm\" (UID: \"e289a048-8c1a-4349-8b3b-8f3628e23bdc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" Feb 23 13:24:08 crc kubenswrapper[4851]: E0223 13:24:08.684189 4851 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:24:08 crc kubenswrapper[4851]: E0223 13:24:08.684265 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert podName:e289a048-8c1a-4349-8b3b-8f3628e23bdc nodeName:}" failed. No retries permitted until 2026-02-23 13:24:12.684244786 +0000 UTC m=+1007.365948464 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" (UID: "e289a048-8c1a-4349-8b3b-8f3628e23bdc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:24:09 crc kubenswrapper[4851]: I0223 13:24:09.089229 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:09 crc kubenswrapper[4851]: I0223 13:24:09.089305 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:09 crc kubenswrapper[4851]: E0223 13:24:09.089464 4851 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:24:09 crc kubenswrapper[4851]: E0223 13:24:09.089563 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:13.089540108 +0000 UTC m=+1007.771243786 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "metrics-server-cert" not found Feb 23 13:24:09 crc kubenswrapper[4851]: E0223 13:24:09.089564 4851 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 13:24:09 crc kubenswrapper[4851]: E0223 13:24:09.089657 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:13.08963559 +0000 UTC m=+1007.771339268 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "webhook-server-cert" not found Feb 23 13:24:12 crc kubenswrapper[4851]: I0223 13:24:12.539573 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert\") pod \"infra-operator-controller-manager-79d975b745-b827v\" (UID: \"834a522f-ca03-403d-8402-679845f7c6c3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" Feb 23 13:24:12 crc kubenswrapper[4851]: E0223 13:24:12.539753 4851 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 13:24:12 crc kubenswrapper[4851]: E0223 13:24:12.539975 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert 
podName:834a522f-ca03-403d-8402-679845f7c6c3 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:20.539953906 +0000 UTC m=+1015.221657584 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert") pod "infra-operator-controller-manager-79d975b745-b827v" (UID: "834a522f-ca03-403d-8402-679845f7c6c3") : secret "infra-operator-webhook-server-cert" not found Feb 23 13:24:12 crc kubenswrapper[4851]: I0223 13:24:12.742879 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm\" (UID: \"e289a048-8c1a-4349-8b3b-8f3628e23bdc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" Feb 23 13:24:12 crc kubenswrapper[4851]: E0223 13:24:12.743052 4851 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:24:12 crc kubenswrapper[4851]: E0223 13:24:12.743176 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert podName:e289a048-8c1a-4349-8b3b-8f3628e23bdc nodeName:}" failed. No retries permitted until 2026-02-23 13:24:20.743152218 +0000 UTC m=+1015.424855896 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" (UID: "e289a048-8c1a-4349-8b3b-8f3628e23bdc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 13:24:13 crc kubenswrapper[4851]: I0223 13:24:13.147368 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:13 crc kubenswrapper[4851]: I0223 13:24:13.147431 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:13 crc kubenswrapper[4851]: E0223 13:24:13.147544 4851 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 13:24:13 crc kubenswrapper[4851]: E0223 13:24:13.147544 4851 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:24:13 crc kubenswrapper[4851]: E0223 13:24:13.147603 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:21.147588985 +0000 UTC m=+1015.829292663 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "webhook-server-cert" not found Feb 23 13:24:13 crc kubenswrapper[4851]: E0223 13:24:13.147618 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:21.147612665 +0000 UTC m=+1015.829316343 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "metrics-server-cert" not found Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.891701 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x" event={"ID":"f17a63ea-4b87-429b-8c90-58790c572b9e","Type":"ContainerStarted","Data":"23842390f589d05536f85d454068aa0e809b633257b931d7bc63b283a16d40f0"} Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.892838 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x" Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.893713 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg" event={"ID":"40f2272b-7e63-4666-b858-9722a0af16c8","Type":"ContainerStarted","Data":"1707ca13007ec39e2dcc15d9175a53c631842bc92ec6166ce4afe792a9cd068a"} Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.894385 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg" Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.905843 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd" event={"ID":"fbc0edce-88b5-4ddc-8495-01e33e7a7753","Type":"ContainerStarted","Data":"8af0a90259a8766dbe458d6f6e55bd493216166c7adbe9a54b9ba0176231f078"} Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.906130 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd" Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.921761 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws" event={"ID":"115cc313-eea6-40cd-9e8a-a7205e83cc07","Type":"ContainerStarted","Data":"4c092e10d60ed62d93c091a40075591d30c0444db3daa2cc35679fa0b2587f27"} Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.921988 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws" Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.925816 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x" podStartSLOduration=2.427805001 podStartE2EDuration="12.925792411s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:05.93384443 +0000 UTC m=+1000.615548108" lastFinishedPulling="2026-02-23 13:24:16.43183184 +0000 UTC m=+1011.113535518" observedRunningTime="2026-02-23 13:24:16.914699407 +0000 UTC m=+1011.596403115" watchObservedRunningTime="2026-02-23 13:24:16.925792411 +0000 UTC m=+1011.607496089" Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.928043 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5" event={"ID":"ca30fe6b-5b33-4e6e-acb5-93a49ae9257d","Type":"ContainerStarted","Data":"ee5251468dfbcb8c88d2471340e0d2e5b84c56841ded2a601551608e2f947625"} Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.928182 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5" Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.930061 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-sd26k" Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.943965 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd" podStartSLOduration=2.694429688 podStartE2EDuration="12.943949745s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.187686625 +0000 UTC m=+1000.869390303" lastFinishedPulling="2026-02-23 13:24:16.437206682 +0000 UTC m=+1011.118910360" observedRunningTime="2026-02-23 13:24:16.943200523 +0000 UTC m=+1011.624904211" watchObservedRunningTime="2026-02-23 13:24:16.943949745 +0000 UTC m=+1011.625653423" Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.947203 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf" event={"ID":"84a8d9f7-24b2-4f08-a917-b614dc537ffe","Type":"ContainerStarted","Data":"6556408efdb13e938dbfc2d5061f12c92f53f9d9850d82012d363e4a6eee2719"} Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.947693 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf" Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.968105 4851 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg" podStartSLOduration=2.588722226 podStartE2EDuration="12.968085308s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.023688183 +0000 UTC m=+1000.705391861" lastFinishedPulling="2026-02-23 13:24:16.403051265 +0000 UTC m=+1011.084754943" observedRunningTime="2026-02-23 13:24:16.963738595 +0000 UTC m=+1011.645442293" watchObservedRunningTime="2026-02-23 13:24:16.968085308 +0000 UTC m=+1011.649788986" Feb 23 13:24:16 crc kubenswrapper[4851]: I0223 13:24:16.996785 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-sd26k" podStartSLOduration=2.632654589 podStartE2EDuration="12.996759019s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.049192965 +0000 UTC m=+1000.730896643" lastFinishedPulling="2026-02-23 13:24:16.413297395 +0000 UTC m=+1011.095001073" observedRunningTime="2026-02-23 13:24:16.98901235 +0000 UTC m=+1011.670716038" watchObservedRunningTime="2026-02-23 13:24:16.996759019 +0000 UTC m=+1011.678462697" Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.045771 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf" podStartSLOduration=2.867202248 podStartE2EDuration="13.045747946s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.224291971 +0000 UTC m=+1000.905995649" lastFinishedPulling="2026-02-23 13:24:16.402837669 +0000 UTC m=+1011.084541347" observedRunningTime="2026-02-23 13:24:17.035767453 +0000 UTC m=+1011.717471141" watchObservedRunningTime="2026-02-23 13:24:17.045747946 +0000 UTC m=+1011.727451624" Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.081625 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5" podStartSLOduration=2.258789918 podStartE2EDuration="13.081606791s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:05.576594219 +0000 UTC m=+1000.258297897" lastFinishedPulling="2026-02-23 13:24:16.399411092 +0000 UTC m=+1011.081114770" observedRunningTime="2026-02-23 13:24:17.074388627 +0000 UTC m=+1011.756092325" watchObservedRunningTime="2026-02-23 13:24:17.081606791 +0000 UTC m=+1011.763310469" Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.956320 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc" event={"ID":"9abe19ef-7cfa-43dd-983c-bcef5a540100","Type":"ContainerStarted","Data":"d8af88935261bc5757d0288f1cc4cf522db8e5f560a9053e0e3bf60ad1fa6ea5"} Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.958172 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-65dws" event={"ID":"33df439b-30ca-4397-a992-be2de607477a","Type":"ContainerStarted","Data":"dd673c8ae7175bff174d3b9388fb73b18939d44cea6a05fddda5eab3ea603349"} Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.958291 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-65dws" Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.959941 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g" event={"ID":"6fb817cf-5b9d-4879-a997-cd3f1d99db3c","Type":"ContainerStarted","Data":"1871325b321ad067e27f4392904cc3dbc5032d34617817f4db21f9f9917948cd"} Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.960398 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g" Feb 23 13:24:17 
crc kubenswrapper[4851]: I0223 13:24:17.965875 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l" event={"ID":"2a0bac92-ab56-4f67-a3a8-09ea4de25ae5","Type":"ContainerStarted","Data":"6c97016e03941a14539568af3b8dc73edd8464e056cca2781bdb54b902580a33"} Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.966440 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l" Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.981552 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69" event={"ID":"ea9b35cb-5758-42d8-8877-ceb1e19eb751","Type":"ContainerStarted","Data":"60e73a4fad5018ff4b5e0eced68e40a332302b19215dca31757e3cf1c2a23dd7"} Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.981614 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69" Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.981625 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck" event={"ID":"ef730879-0a7d-4e4a-925e-8ef30c366d64","Type":"ContainerStarted","Data":"aa0ff42787bf4440ba1b652db3b961c96062cd24895c8c41c85c726240796164"} Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.981654 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck" Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.981666 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r" Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.981674 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r" event={"ID":"c1c9227e-ff98-4005-ba5c-e2cfa2f9bb44","Type":"ContainerStarted","Data":"c6e726b23e22944e4b2b0ede32549e38164f2365819c82601c9c71322738479e"} Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.982174 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws" podStartSLOduration=3.147498441 podStartE2EDuration="13.982155749s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:05.604704865 +0000 UTC m=+1000.286408543" lastFinishedPulling="2026-02-23 13:24:16.439362173 +0000 UTC m=+1011.121065851" observedRunningTime="2026-02-23 13:24:17.242864975 +0000 UTC m=+1011.924568653" watchObservedRunningTime="2026-02-23 13:24:17.982155749 +0000 UTC m=+1012.663859427" Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.982483 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-sd26k" event={"ID":"0dbb5228-ae4a-427d-97a2-3768b460e134","Type":"ContainerStarted","Data":"bc7746e0d87d9349a228e497d199d214b913e65b4f7769782f6035862c25321d"} Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.984962 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc" podStartSLOduration=3.520683803 podStartE2EDuration="13.984951468s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:05.972365771 +0000 UTC m=+1000.654069439" lastFinishedPulling="2026-02-23 13:24:16.436633426 +0000 UTC m=+1011.118337104" observedRunningTime="2026-02-23 13:24:17.981281494 +0000 UTC m=+1012.662985182" watchObservedRunningTime="2026-02-23 13:24:17.984951468 +0000 UTC m=+1012.666655146" Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.985066 4851 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf" event={"ID":"71fd5f4f-a9fc-4242-813a-3fb7d5827c41","Type":"ContainerStarted","Data":"2ffad7878bf1c007c5e3ba27c532533cdce3fd91908696aaa59ea39b767b7d28"} Feb 23 13:24:17 crc kubenswrapper[4851]: I0223 13:24:17.985087 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf" Feb 23 13:24:18 crc kubenswrapper[4851]: I0223 13:24:18.013135 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l" podStartSLOduration=3.921582819 podStartE2EDuration="14.013117875s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.322978344 +0000 UTC m=+1001.004682022" lastFinishedPulling="2026-02-23 13:24:16.4145134 +0000 UTC m=+1011.096217078" observedRunningTime="2026-02-23 13:24:18.012744415 +0000 UTC m=+1012.694448103" watchObservedRunningTime="2026-02-23 13:24:18.013117875 +0000 UTC m=+1012.694821553" Feb 23 13:24:18 crc kubenswrapper[4851]: I0223 13:24:18.037321 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g" podStartSLOduration=3.631551631 podStartE2EDuration="14.03730441s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.03416952 +0000 UTC m=+1000.715873198" lastFinishedPulling="2026-02-23 13:24:16.439922299 +0000 UTC m=+1011.121625977" observedRunningTime="2026-02-23 13:24:18.036151197 +0000 UTC m=+1012.717854875" watchObservedRunningTime="2026-02-23 13:24:18.03730441 +0000 UTC m=+1012.719008088" Feb 23 13:24:18 crc kubenswrapper[4851]: I0223 13:24:18.083895 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck" 
podStartSLOduration=3.693874575 podStartE2EDuration="14.083869048s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.023316473 +0000 UTC m=+1000.705020161" lastFinishedPulling="2026-02-23 13:24:16.413310956 +0000 UTC m=+1011.095014634" observedRunningTime="2026-02-23 13:24:18.079061192 +0000 UTC m=+1012.760764880" watchObservedRunningTime="2026-02-23 13:24:18.083869048 +0000 UTC m=+1012.765572726" Feb 23 13:24:18 crc kubenswrapper[4851]: I0223 13:24:18.116760 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r" podStartSLOduration=3.2087839049999998 podStartE2EDuration="14.116733568s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:05.458515967 +0000 UTC m=+1000.140219645" lastFinishedPulling="2026-02-23 13:24:16.36646563 +0000 UTC m=+1011.048169308" observedRunningTime="2026-02-23 13:24:18.11117203 +0000 UTC m=+1012.792875708" watchObservedRunningTime="2026-02-23 13:24:18.116733568 +0000 UTC m=+1012.798437256" Feb 23 13:24:18 crc kubenswrapper[4851]: I0223 13:24:18.136590 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69" podStartSLOduration=3.164227967 podStartE2EDuration="13.136569449s" podCreationTimestamp="2026-02-23 13:24:05 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.489798406 +0000 UTC m=+1001.171502084" lastFinishedPulling="2026-02-23 13:24:16.462139888 +0000 UTC m=+1011.143843566" observedRunningTime="2026-02-23 13:24:18.133608795 +0000 UTC m=+1012.815312483" watchObservedRunningTime="2026-02-23 13:24:18.136569449 +0000 UTC m=+1012.818273127" Feb 23 13:24:18 crc kubenswrapper[4851]: I0223 13:24:18.153888 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-65dws" 
podStartSLOduration=3.960960584 podStartE2EDuration="14.153868689s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.245601804 +0000 UTC m=+1000.927305482" lastFinishedPulling="2026-02-23 13:24:16.438509909 +0000 UTC m=+1011.120213587" observedRunningTime="2026-02-23 13:24:18.150497723 +0000 UTC m=+1012.832201421" watchObservedRunningTime="2026-02-23 13:24:18.153868689 +0000 UTC m=+1012.835572367" Feb 23 13:24:18 crc kubenswrapper[4851]: I0223 13:24:18.175113 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf" podStartSLOduration=3.637883391 podStartE2EDuration="14.1750941s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:05.970665193 +0000 UTC m=+1000.652368871" lastFinishedPulling="2026-02-23 13:24:16.507875892 +0000 UTC m=+1011.189579580" observedRunningTime="2026-02-23 13:24:18.174090501 +0000 UTC m=+1012.855794189" watchObservedRunningTime="2026-02-23 13:24:18.1750941 +0000 UTC m=+1012.856797778" Feb 23 13:24:18 crc kubenswrapper[4851]: I0223 13:24:18.993632 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc" Feb 23 13:24:20 crc kubenswrapper[4851]: I0223 13:24:20.592851 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert\") pod \"infra-operator-controller-manager-79d975b745-b827v\" (UID: \"834a522f-ca03-403d-8402-679845f7c6c3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" Feb 23 13:24:20 crc kubenswrapper[4851]: I0223 13:24:20.598382 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/834a522f-ca03-403d-8402-679845f7c6c3-cert\") pod 
\"infra-operator-controller-manager-79d975b745-b827v\" (UID: \"834a522f-ca03-403d-8402-679845f7c6c3\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" Feb 23 13:24:20 crc kubenswrapper[4851]: I0223 13:24:20.795163 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm\" (UID: \"e289a048-8c1a-4349-8b3b-8f3628e23bdc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" Feb 23 13:24:20 crc kubenswrapper[4851]: I0223 13:24:20.798817 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e289a048-8c1a-4349-8b3b-8f3628e23bdc-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm\" (UID: \"e289a048-8c1a-4349-8b3b-8f3628e23bdc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" Feb 23 13:24:20 crc kubenswrapper[4851]: I0223 13:24:20.882969 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" Feb 23 13:24:20 crc kubenswrapper[4851]: I0223 13:24:20.972389 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" Feb 23 13:24:21 crc kubenswrapper[4851]: I0223 13:24:21.201596 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:21 crc kubenswrapper[4851]: I0223 13:24:21.201664 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:21 crc kubenswrapper[4851]: E0223 13:24:21.201800 4851 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 13:24:21 crc kubenswrapper[4851]: E0223 13:24:21.201863 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:37.201845148 +0000 UTC m=+1031.883548826 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "webhook-server-cert" not found Feb 23 13:24:21 crc kubenswrapper[4851]: E0223 13:24:21.201925 4851 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 13:24:21 crc kubenswrapper[4851]: E0223 13:24:21.201953 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs podName:7a7fd548-a78f-4096-b68a-2bc28b937e96 nodeName:}" failed. No retries permitted until 2026-02-23 13:24:37.201944441 +0000 UTC m=+1031.883648119 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs") pod "openstack-operator-controller-manager-68bc894585-xr5dt" (UID: "7a7fd548-a78f-4096-b68a-2bc28b937e96") : secret "metrics-server-cert" not found Feb 23 13:24:21 crc kubenswrapper[4851]: I0223 13:24:21.757236 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-b827v"] Feb 23 13:24:21 crc kubenswrapper[4851]: W0223 13:24:21.762854 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod834a522f_ca03_403d_8402_679845f7c6c3.slice/crio-44302c8328920a9f424c84963ea9cdae7f81182446cac9dad0686fa93ebd6def WatchSource:0}: Error finding container 44302c8328920a9f424c84963ea9cdae7f81182446cac9dad0686fa93ebd6def: Status 404 returned error can't find the container with id 44302c8328920a9f424c84963ea9cdae7f81182446cac9dad0686fa93ebd6def Feb 23 13:24:21 crc kubenswrapper[4851]: I0223 13:24:21.926212 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm"] Feb 23 13:24:21 crc kubenswrapper[4851]: W0223 13:24:21.930719 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode289a048_8c1a_4349_8b3b_8f3628e23bdc.slice/crio-4c33a54e2f5314888e69455ed23050776094c366b903a767cac5d0ea1c307350 WatchSource:0}: Error finding container 4c33a54e2f5314888e69455ed23050776094c366b903a767cac5d0ea1c307350: Status 404 returned error can't find the container with id 4c33a54e2f5314888e69455ed23050776094c366b903a767cac5d0ea1c307350 Feb 23 13:24:22 crc kubenswrapper[4851]: I0223 13:24:22.012495 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" event={"ID":"f9f540e9-5c10-4e33-b283-328276817914","Type":"ContainerStarted","Data":"3655d1b6dd335dfe1934b6fb1f9d825a2ad13e670327e115bc25b70660763f30"} Feb 23 13:24:22 crc kubenswrapper[4851]: I0223 13:24:22.012756 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" Feb 23 13:24:22 crc kubenswrapper[4851]: I0223 13:24:22.013513 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" event={"ID":"e289a048-8c1a-4349-8b3b-8f3628e23bdc","Type":"ContainerStarted","Data":"4c33a54e2f5314888e69455ed23050776094c366b903a767cac5d0ea1c307350"} Feb 23 13:24:22 crc kubenswrapper[4851]: I0223 13:24:22.015205 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" event={"ID":"834a522f-ca03-403d-8402-679845f7c6c3","Type":"ContainerStarted","Data":"44302c8328920a9f424c84963ea9cdae7f81182446cac9dad0686fa93ebd6def"} Feb 23 13:24:22 crc kubenswrapper[4851]: I0223 13:24:22.016856 4851 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" event={"ID":"6769f01c-bcc7-4e3e-a791-0fa315f82b37","Type":"ContainerStarted","Data":"59dff1df00343474baf9fffa7e2cf7c52e3ff7760b96bfd4747e291234627bc3"} Feb 23 13:24:22 crc kubenswrapper[4851]: I0223 13:24:22.017051 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" Feb 23 13:24:22 crc kubenswrapper[4851]: I0223 13:24:22.018066 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" event={"ID":"946a66f3-be29-4e8b-a800-637ef24a5694","Type":"ContainerStarted","Data":"3fc7cc2f8a9bfd539766f4b882b5c534f747e2516e94fed43f494d1a3d4537ea"} Feb 23 13:24:22 crc kubenswrapper[4851]: I0223 13:24:22.018242 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" Feb 23 13:24:22 crc kubenswrapper[4851]: I0223 13:24:22.033353 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" podStartSLOduration=2.923167921 podStartE2EDuration="18.03330546s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.362147533 +0000 UTC m=+1001.043851211" lastFinishedPulling="2026-02-23 13:24:21.472285072 +0000 UTC m=+1016.153988750" observedRunningTime="2026-02-23 13:24:22.030861311 +0000 UTC m=+1016.712564999" watchObservedRunningTime="2026-02-23 13:24:22.03330546 +0000 UTC m=+1016.715009148" Feb 23 13:24:22 crc kubenswrapper[4851]: I0223 13:24:22.057819 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" podStartSLOduration=3.006192061 podStartE2EDuration="18.057795953s" podCreationTimestamp="2026-02-23 13:24:04 
+0000 UTC" firstStartedPulling="2026-02-23 13:24:06.416026618 +0000 UTC m=+1001.097730296" lastFinishedPulling="2026-02-23 13:24:21.46763051 +0000 UTC m=+1016.149334188" observedRunningTime="2026-02-23 13:24:22.049390375 +0000 UTC m=+1016.731094083" watchObservedRunningTime="2026-02-23 13:24:22.057795953 +0000 UTC m=+1016.739499631" Feb 23 13:24:22 crc kubenswrapper[4851]: I0223 13:24:22.069180 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" podStartSLOduration=2.96376873 podStartE2EDuration="18.069158555s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.362242956 +0000 UTC m=+1001.043946624" lastFinishedPulling="2026-02-23 13:24:21.467632771 +0000 UTC m=+1016.149336449" observedRunningTime="2026-02-23 13:24:22.067956921 +0000 UTC m=+1016.749660619" watchObservedRunningTime="2026-02-23 13:24:22.069158555 +0000 UTC m=+1016.750862233" Feb 23 13:24:24 crc kubenswrapper[4851]: I0223 13:24:24.849110 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-nhj9r" Feb 23 13:24:24 crc kubenswrapper[4851]: I0223 13:24:24.871969 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-2xlr5" Feb 23 13:24:24 crc kubenswrapper[4851]: I0223 13:24:24.894964 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-k8pws" Feb 23 13:24:24 crc kubenswrapper[4851]: I0223 13:24:24.917619 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-rm79x" Feb 23 13:24:24 crc kubenswrapper[4851]: I0223 13:24:24.929381 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-69f49c598c-tvt8g" Feb 23 13:24:24 crc kubenswrapper[4851]: I0223 13:24:24.951187 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-8wdqc" Feb 23 13:24:25 crc kubenswrapper[4851]: I0223 13:24:25.109926 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bctck" Feb 23 13:24:25 crc kubenswrapper[4851]: I0223 13:24:25.117793 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-pd5bf" Feb 23 13:24:25 crc kubenswrapper[4851]: I0223 13:24:25.165784 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-sd26k" Feb 23 13:24:25 crc kubenswrapper[4851]: I0223 13:24:25.225634 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vrjqg" Feb 23 13:24:25 crc kubenswrapper[4851]: I0223 13:24:25.308250 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-rbgkf" Feb 23 13:24:25 crc kubenswrapper[4851]: I0223 13:24:25.332782 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-65dws" Feb 23 13:24:25 crc kubenswrapper[4851]: I0223 13:24:25.357798 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-x2gtd" Feb 23 13:24:25 crc kubenswrapper[4851]: I0223 13:24:25.470465 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-589c568786-khm2l" Feb 23 13:24:25 crc kubenswrapper[4851]: I0223 13:24:25.628244 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gdn69" Feb 23 13:24:27 crc kubenswrapper[4851]: I0223 13:24:27.076883 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" event={"ID":"e289a048-8c1a-4349-8b3b-8f3628e23bdc","Type":"ContainerStarted","Data":"9a80083541fdbb50d84e73819b95432cb0d594fa6c300094c286db1ddf4125a9"} Feb 23 13:24:27 crc kubenswrapper[4851]: I0223 13:24:27.077308 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" Feb 23 13:24:27 crc kubenswrapper[4851]: I0223 13:24:27.086913 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" event={"ID":"834a522f-ca03-403d-8402-679845f7c6c3","Type":"ContainerStarted","Data":"ba0a01e01c540e2b8b906841ab8ceb8fc31c12227af9b042bbfd6130c2a4f6db"} Feb 23 13:24:27 crc kubenswrapper[4851]: I0223 13:24:27.087056 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" Feb 23 13:24:27 crc kubenswrapper[4851]: I0223 13:24:27.093489 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8" event={"ID":"ddcb4697-a6af-4baa-bd78-ae1f3b47c6af","Type":"ContainerStarted","Data":"a2f6db667e69b284b6f5afb59a5dea543500487715a7a2d96b58a39244fcd806"} Feb 23 13:24:27 crc kubenswrapper[4851]: I0223 13:24:27.093702 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8" Feb 23 
13:24:27 crc kubenswrapper[4851]: I0223 13:24:27.108585 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" podStartSLOduration=18.543798909 podStartE2EDuration="23.108564218s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:21.932783485 +0000 UTC m=+1016.614487163" lastFinishedPulling="2026-02-23 13:24:26.497548794 +0000 UTC m=+1021.179252472" observedRunningTime="2026-02-23 13:24:27.105549462 +0000 UTC m=+1021.787253160" watchObservedRunningTime="2026-02-23 13:24:27.108564218 +0000 UTC m=+1021.790267896" Feb 23 13:24:27 crc kubenswrapper[4851]: I0223 13:24:27.128446 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8" podStartSLOduration=2.952613415 podStartE2EDuration="23.12842812s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.322947203 +0000 UTC m=+1001.004650881" lastFinishedPulling="2026-02-23 13:24:26.498761898 +0000 UTC m=+1021.180465586" observedRunningTime="2026-02-23 13:24:27.124084037 +0000 UTC m=+1021.805787715" watchObservedRunningTime="2026-02-23 13:24:27.12842812 +0000 UTC m=+1021.810131788" Feb 23 13:24:27 crc kubenswrapper[4851]: I0223 13:24:27.141897 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" podStartSLOduration=18.409017774 podStartE2EDuration="23.141876s" podCreationTimestamp="2026-02-23 13:24:04 +0000 UTC" firstStartedPulling="2026-02-23 13:24:21.764850002 +0000 UTC m=+1016.446553680" lastFinishedPulling="2026-02-23 13:24:26.497708228 +0000 UTC m=+1021.179411906" observedRunningTime="2026-02-23 13:24:27.137843556 +0000 UTC m=+1021.819547254" watchObservedRunningTime="2026-02-23 13:24:27.141876 +0000 UTC m=+1021.823579678" Feb 23 13:24:35 crc 
kubenswrapper[4851]: I0223 13:24:35.391530 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-x9kh8" Feb 23 13:24:35 crc kubenswrapper[4851]: I0223 13:24:35.433157 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hktpx" Feb 23 13:24:35 crc kubenswrapper[4851]: I0223 13:24:35.455116 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bwbpw" Feb 23 13:24:35 crc kubenswrapper[4851]: I0223 13:24:35.614784 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-zktk2" Feb 23 13:24:36 crc kubenswrapper[4851]: I0223 13:24:36.156156 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9" event={"ID":"18ea2332-4904-4213-9ba2-c678a2125b37","Type":"ContainerStarted","Data":"12b74f78b40e934224d0265657b5d8c86f9db2bd55f073ea54c12370fd15b2d7"} Feb 23 13:24:37 crc kubenswrapper[4851]: I0223 13:24:37.186562 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-sv2f9" podStartSLOduration=9.621676107 podStartE2EDuration="32.186541339s" podCreationTimestamp="2026-02-23 13:24:05 +0000 UTC" firstStartedPulling="2026-02-23 13:24:06.412102507 +0000 UTC m=+1001.093806185" lastFinishedPulling="2026-02-23 13:24:28.976967739 +0000 UTC m=+1023.658671417" observedRunningTime="2026-02-23 13:24:37.179911362 +0000 UTC m=+1031.861615060" watchObservedRunningTime="2026-02-23 13:24:37.186541339 +0000 UTC m=+1031.868245017" Feb 23 13:24:37 crc kubenswrapper[4851]: I0223 13:24:37.247689 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:37 crc kubenswrapper[4851]: I0223 13:24:37.247754 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:37 crc kubenswrapper[4851]: I0223 13:24:37.253678 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-metrics-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:37 crc kubenswrapper[4851]: I0223 13:24:37.254388 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a7fd548-a78f-4096-b68a-2bc28b937e96-webhook-certs\") pod \"openstack-operator-controller-manager-68bc894585-xr5dt\" (UID: \"7a7fd548-a78f-4096-b68a-2bc28b937e96\") " pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:37 crc kubenswrapper[4851]: I0223 13:24:37.446449 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-42hjd" Feb 23 13:24:37 crc kubenswrapper[4851]: I0223 13:24:37.454846 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:37 crc kubenswrapper[4851]: I0223 13:24:37.882482 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt"] Feb 23 13:24:37 crc kubenswrapper[4851]: W0223 13:24:37.884664 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a7fd548_a78f_4096_b68a_2bc28b937e96.slice/crio-60ec766491c432f7c954f0471874dc81886bdffbe98b2b5a44e7455d01123b09 WatchSource:0}: Error finding container 60ec766491c432f7c954f0471874dc81886bdffbe98b2b5a44e7455d01123b09: Status 404 returned error can't find the container with id 60ec766491c432f7c954f0471874dc81886bdffbe98b2b5a44e7455d01123b09 Feb 23 13:24:38 crc kubenswrapper[4851]: I0223 13:24:38.168426 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" event={"ID":"7a7fd548-a78f-4096-b68a-2bc28b937e96","Type":"ContainerStarted","Data":"8657c6631708839d850d067e37f997584672be014e9512c663f5387dd1199bca"} Feb 23 13:24:38 crc kubenswrapper[4851]: I0223 13:24:38.168466 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" event={"ID":"7a7fd548-a78f-4096-b68a-2bc28b937e96","Type":"ContainerStarted","Data":"60ec766491c432f7c954f0471874dc81886bdffbe98b2b5a44e7455d01123b09"} Feb 23 13:24:38 crc kubenswrapper[4851]: I0223 13:24:38.168553 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:24:38 crc kubenswrapper[4851]: I0223 13:24:38.191868 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" 
podStartSLOduration=33.191842053 podStartE2EDuration="33.191842053s" podCreationTimestamp="2026-02-23 13:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:24:38.188065106 +0000 UTC m=+1032.869768814" watchObservedRunningTime="2026-02-23 13:24:38.191842053 +0000 UTC m=+1032.873545731" Feb 23 13:24:40 crc kubenswrapper[4851]: I0223 13:24:40.895507 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-b827v" Feb 23 13:24:40 crc kubenswrapper[4851]: I0223 13:24:40.980727 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm" Feb 23 13:24:47 crc kubenswrapper[4851]: I0223 13:24:47.462298 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-68bc894585-xr5dt" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.504902 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-652f5"] Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.507960 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.510264 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.510309 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.510845 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.510975 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-8hcls" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.520716 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-652f5"] Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.572782 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-t8hl9"] Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.573975 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.575800 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.592059 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-t8hl9"] Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.615139 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fwp\" (UniqueName: \"kubernetes.io/projected/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-kube-api-access-q4fwp\") pod \"dnsmasq-dns-675f4bcbfc-652f5\" (UID: \"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.615238 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-config\") pod \"dnsmasq-dns-675f4bcbfc-652f5\" (UID: \"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.716663 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-config\") pod \"dnsmasq-dns-675f4bcbfc-652f5\" (UID: \"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.716730 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-config\") pod \"dnsmasq-dns-78dd6ddcc-t8hl9\" (UID: \"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:03 crc 
kubenswrapper[4851]: I0223 13:25:03.716759 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6gg2\" (UniqueName: \"kubernetes.io/projected/2a5341db-3c12-4a17-8f03-78c8dad9379d-kube-api-access-b6gg2\") pod \"dnsmasq-dns-78dd6ddcc-t8hl9\" (UID: \"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.716786 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-t8hl9\" (UID: \"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.716811 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4fwp\" (UniqueName: \"kubernetes.io/projected/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-kube-api-access-q4fwp\") pod \"dnsmasq-dns-675f4bcbfc-652f5\" (UID: \"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.717877 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-config\") pod \"dnsmasq-dns-675f4bcbfc-652f5\" (UID: \"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.737806 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4fwp\" (UniqueName: \"kubernetes.io/projected/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-kube-api-access-q4fwp\") pod \"dnsmasq-dns-675f4bcbfc-652f5\" (UID: \"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" Feb 23 13:25:03 crc kubenswrapper[4851]: 
I0223 13:25:03.818554 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6gg2\" (UniqueName: \"kubernetes.io/projected/2a5341db-3c12-4a17-8f03-78c8dad9379d-kube-api-access-b6gg2\") pod \"dnsmasq-dns-78dd6ddcc-t8hl9\" (UID: \"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.818670 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-t8hl9\" (UID: \"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.818876 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-config\") pod \"dnsmasq-dns-78dd6ddcc-t8hl9\" (UID: \"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.819848 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-t8hl9\" (UID: \"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.820584 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-config\") pod \"dnsmasq-dns-78dd6ddcc-t8hl9\" (UID: \"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.838978 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6gg2\" 
(UniqueName: \"kubernetes.io/projected/2a5341db-3c12-4a17-8f03-78c8dad9379d-kube-api-access-b6gg2\") pod \"dnsmasq-dns-78dd6ddcc-t8hl9\" (UID: \"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.841299 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" Feb 23 13:25:03 crc kubenswrapper[4851]: I0223 13:25:03.890904 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:04 crc kubenswrapper[4851]: I0223 13:25:04.083239 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-652f5"] Feb 23 13:25:04 crc kubenswrapper[4851]: I0223 13:25:04.342250 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" event={"ID":"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b","Type":"ContainerStarted","Data":"2cb3a2bd17af1aa78ceaf1aece8e25fc3d811db8a45dc42f223a801fae302b52"} Feb 23 13:25:04 crc kubenswrapper[4851]: I0223 13:25:04.354062 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-t8hl9"] Feb 23 13:25:04 crc kubenswrapper[4851]: W0223 13:25:04.357761 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a5341db_3c12_4a17_8f03_78c8dad9379d.slice/crio-174d8853df394850427c78c77b8696fb11dcbdec57a02404baace78002e86925 WatchSource:0}: Error finding container 174d8853df394850427c78c77b8696fb11dcbdec57a02404baace78002e86925: Status 404 returned error can't find the container with id 174d8853df394850427c78c77b8696fb11dcbdec57a02404baace78002e86925 Feb 23 13:25:05 crc kubenswrapper[4851]: I0223 13:25:05.351535 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" 
event={"ID":"2a5341db-3c12-4a17-8f03-78c8dad9379d","Type":"ContainerStarted","Data":"174d8853df394850427c78c77b8696fb11dcbdec57a02404baace78002e86925"} Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.150261 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-652f5"] Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.182011 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9xb8s"] Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.183850 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.211625 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9xb8s"] Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.256428 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwxt9\" (UniqueName: \"kubernetes.io/projected/a1a68000-b534-4591-90e2-a44e650c15e1-kube-api-access-nwxt9\") pod \"dnsmasq-dns-666b6646f7-9xb8s\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.256546 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-config\") pod \"dnsmasq-dns-666b6646f7-9xb8s\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.256677 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9xb8s\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " 
pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.358570 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwxt9\" (UniqueName: \"kubernetes.io/projected/a1a68000-b534-4591-90e2-a44e650c15e1-kube-api-access-nwxt9\") pod \"dnsmasq-dns-666b6646f7-9xb8s\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.358665 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-config\") pod \"dnsmasq-dns-666b6646f7-9xb8s\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.358716 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9xb8s\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.362399 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9xb8s\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.364877 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-config\") pod \"dnsmasq-dns-666b6646f7-9xb8s\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.402596 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwxt9\" (UniqueName: \"kubernetes.io/projected/a1a68000-b534-4591-90e2-a44e650c15e1-kube-api-access-nwxt9\") pod \"dnsmasq-dns-666b6646f7-9xb8s\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.486777 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-t8hl9"] Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.514516 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rhthr"] Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.521972 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.526749 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.527589 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rhthr"] Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.563414 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnlqt\" (UniqueName: \"kubernetes.io/projected/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-kube-api-access-xnlqt\") pod \"dnsmasq-dns-57d769cc4f-rhthr\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.563949 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-config\") pod \"dnsmasq-dns-57d769cc4f-rhthr\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" 
Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.564041 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rhthr\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.665622 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnlqt\" (UniqueName: \"kubernetes.io/projected/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-kube-api-access-xnlqt\") pod \"dnsmasq-dns-57d769cc4f-rhthr\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.665750 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-config\") pod \"dnsmasq-dns-57d769cc4f-rhthr\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.665823 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rhthr\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.667091 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-rhthr\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.668126 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-config\") pod \"dnsmasq-dns-57d769cc4f-rhthr\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.696329 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnlqt\" (UniqueName: \"kubernetes.io/projected/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-kube-api-access-xnlqt\") pod \"dnsmasq-dns-57d769cc4f-rhthr\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:06 crc kubenswrapper[4851]: I0223 13:25:06.859046 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.021479 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9xb8s"] Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.322940 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rhthr"] Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.337463 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.338885 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.341118 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hh5rw" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.341415 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.341605 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.341768 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.343559 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.345577 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.348778 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.372288 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.377569 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db9mn\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-kube-api-access-db9mn\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.377620 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/46bf34c9-f0ec-4de6-ae40-fd334c23af27-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.377667 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.377688 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.377751 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.377790 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46bf34c9-f0ec-4de6-ae40-fd334c23af27-pod-info\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.377814 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-plugins-conf\") 
pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.377835 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.377860 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-config-data\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.377888 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-server-conf\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.377995 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.378096 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" event={"ID":"a1a68000-b534-4591-90e2-a44e650c15e1","Type":"ContainerStarted","Data":"611ed65f240d5b29511ca894ded4fe57b8ee79e92421a1dd1ba4f1d226863c3c"} Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 
13:25:07.480017 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db9mn\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-kube-api-access-db9mn\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.480092 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46bf34c9-f0ec-4de6-ae40-fd334c23af27-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.480144 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.480167 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.480197 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.480248 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/46bf34c9-f0ec-4de6-ae40-fd334c23af27-pod-info\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.480301 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.480322 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.480366 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-config-data\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.480388 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-server-conf\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.480403 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " 
pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.483653 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.483678 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.484021 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.484021 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-config-data\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.484122 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.485528 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-server-conf\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.487138 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.487868 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46bf34c9-f0ec-4de6-ae40-fd334c23af27-pod-info\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.488854 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46bf34c9-f0ec-4de6-ae40-fd334c23af27-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.495344 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.498253 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db9mn\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-kube-api-access-db9mn\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " 
pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.516565 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.644859 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.646466 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.653943 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.654729 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.654732 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.655366 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.655470 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8gl95" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.655896 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.657418 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.662552 4851 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.673458 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.684522 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.684739 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.684823 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.684847 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.685001 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec010635-96e5-448a-98c1-e458fd6f31ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.685067 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.685170 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec010635-96e5-448a-98c1-e458fd6f31ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.685232 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.685293 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.685366 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vf625\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-kube-api-access-vf625\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.685483 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.789262 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.789810 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.789839 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec010635-96e5-448a-98c1-e458fd6f31ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.789884 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.789929 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf625\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-kube-api-access-vf625\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.789989 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.790070 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.790146 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.790146 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.790175 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.790202 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.790280 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec010635-96e5-448a-98c1-e458fd6f31ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.791769 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.792091 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") device mount path \"/mnt/openstack/pv02\"" 
pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.793291 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.794094 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.797792 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.798411 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec010635-96e5-448a-98c1-e458fd6f31ed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.798471 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec010635-96e5-448a-98c1-e458fd6f31ed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.802626 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.811933 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.812639 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf625\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-kube-api-access-vf625\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.814317 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:07 crc kubenswrapper[4851]: I0223 13:25:07.982676 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:25:08 crc kubenswrapper[4851]: I0223 13:25:08.999138 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.000664 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.003997 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.004061 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.004708 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.005659 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zjwxd" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.009473 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.010500 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.141397 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0d61403-fda9-4081-8c39-32ff86cc879c-kolla-config\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.142020 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d61403-fda9-4081-8c39-32ff86cc879c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.142098 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.142125 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0d61403-fda9-4081-8c39-32ff86cc879c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.142199 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbtr4\" (UniqueName: \"kubernetes.io/projected/a0d61403-fda9-4081-8c39-32ff86cc879c-kube-api-access-dbtr4\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.142303 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0d61403-fda9-4081-8c39-32ff86cc879c-config-data-default\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.142374 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d61403-fda9-4081-8c39-32ff86cc879c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.143218 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a0d61403-fda9-4081-8c39-32ff86cc879c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.245049 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.245099 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0d61403-fda9-4081-8c39-32ff86cc879c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.245122 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbtr4\" (UniqueName: \"kubernetes.io/projected/a0d61403-fda9-4081-8c39-32ff86cc879c-kube-api-access-dbtr4\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.245144 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0d61403-fda9-4081-8c39-32ff86cc879c-config-data-default\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.245165 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d61403-fda9-4081-8c39-32ff86cc879c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.245210 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0d61403-fda9-4081-8c39-32ff86cc879c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.245255 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0d61403-fda9-4081-8c39-32ff86cc879c-kolla-config\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.245269 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d61403-fda9-4081-8c39-32ff86cc879c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.246592 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.247519 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a0d61403-fda9-4081-8c39-32ff86cc879c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 
13:25:09.249171 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a0d61403-fda9-4081-8c39-32ff86cc879c-config-data-default\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.249672 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0d61403-fda9-4081-8c39-32ff86cc879c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.250892 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a0d61403-fda9-4081-8c39-32ff86cc879c-kolla-config\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.273529 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0d61403-fda9-4081-8c39-32ff86cc879c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.278290 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.278600 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbtr4\" (UniqueName: 
\"kubernetes.io/projected/a0d61403-fda9-4081-8c39-32ff86cc879c-kube-api-access-dbtr4\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.281806 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0d61403-fda9-4081-8c39-32ff86cc879c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a0d61403-fda9-4081-8c39-32ff86cc879c\") " pod="openstack/openstack-galera-0" Feb 23 13:25:09 crc kubenswrapper[4851]: I0223 13:25:09.339155 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.354343 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.355550 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.358870 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.359017 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.359040 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4xr8q" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.359402 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.374081 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.378206 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb3d0e2c-9427-4585-8f01-0e1640feca9a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.378246 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq567\" (UniqueName: \"kubernetes.io/projected/cb3d0e2c-9427-4585-8f01-0e1640feca9a-kube-api-access-dq567\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.378263 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/cb3d0e2c-9427-4585-8f01-0e1640feca9a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.378292 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.378547 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cb3d0e2c-9427-4585-8f01-0e1640feca9a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.378635 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb3d0e2c-9427-4585-8f01-0e1640feca9a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.378670 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb3d0e2c-9427-4585-8f01-0e1640feca9a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.378814 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/cb3d0e2c-9427-4585-8f01-0e1640feca9a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.480403 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cb3d0e2c-9427-4585-8f01-0e1640feca9a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.480479 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb3d0e2c-9427-4585-8f01-0e1640feca9a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.480504 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq567\" (UniqueName: \"kubernetes.io/projected/cb3d0e2c-9427-4585-8f01-0e1640feca9a-kube-api-access-dq567\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.480525 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cb3d0e2c-9427-4585-8f01-0e1640feca9a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.480556 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.480611 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cb3d0e2c-9427-4585-8f01-0e1640feca9a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.480879 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.481068 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cb3d0e2c-9427-4585-8f01-0e1640feca9a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.482146 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cb3d0e2c-9427-4585-8f01-0e1640feca9a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.482874 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb3d0e2c-9427-4585-8f01-0e1640feca9a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " 
pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.493373 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb3d0e2c-9427-4585-8f01-0e1640feca9a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.493425 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb3d0e2c-9427-4585-8f01-0e1640feca9a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.496981 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cb3d0e2c-9427-4585-8f01-0e1640feca9a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.498566 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb3d0e2c-9427-4585-8f01-0e1640feca9a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.501449 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq567\" (UniqueName: \"kubernetes.io/projected/cb3d0e2c-9427-4585-8f01-0e1640feca9a-kube-api-access-dq567\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: 
I0223 13:25:10.504652 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.514585 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb3d0e2c-9427-4585-8f01-0e1640feca9a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cb3d0e2c-9427-4585-8f01-0e1640feca9a\") " pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.618083 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.619205 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.623133 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-dhztl" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.623306 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.623455 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.627177 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.692730 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.695940 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aee353b8-8a37-4055-a016-2c1aac2cf20b-kolla-config\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.696015 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/aee353b8-8a37-4055-a016-2c1aac2cf20b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.696060 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aee353b8-8a37-4055-a016-2c1aac2cf20b-config-data\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.696146 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee353b8-8a37-4055-a016-2c1aac2cf20b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.696210 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqqjk\" (UniqueName: \"kubernetes.io/projected/aee353b8-8a37-4055-a016-2c1aac2cf20b-kube-api-access-hqqjk\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 
13:25:10.796800 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aee353b8-8a37-4055-a016-2c1aac2cf20b-kolla-config\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.796908 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/aee353b8-8a37-4055-a016-2c1aac2cf20b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.796935 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aee353b8-8a37-4055-a016-2c1aac2cf20b-config-data\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.797014 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee353b8-8a37-4055-a016-2c1aac2cf20b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.797088 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqqjk\" (UniqueName: \"kubernetes.io/projected/aee353b8-8a37-4055-a016-2c1aac2cf20b-kube-api-access-hqqjk\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.797788 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aee353b8-8a37-4055-a016-2c1aac2cf20b-kolla-config\") pod 
\"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.799715 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aee353b8-8a37-4055-a016-2c1aac2cf20b-config-data\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.800121 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/aee353b8-8a37-4055-a016-2c1aac2cf20b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.802001 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee353b8-8a37-4055-a016-2c1aac2cf20b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.821163 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqqjk\" (UniqueName: \"kubernetes.io/projected/aee353b8-8a37-4055-a016-2c1aac2cf20b-kube-api-access-hqqjk\") pod \"memcached-0\" (UID: \"aee353b8-8a37-4055-a016-2c1aac2cf20b\") " pod="openstack/memcached-0" Feb 23 13:25:10 crc kubenswrapper[4851]: I0223 13:25:10.938942 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 23 13:25:11 crc kubenswrapper[4851]: I0223 13:25:11.413769 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" event={"ID":"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5","Type":"ContainerStarted","Data":"47f822033880a4e08faa83a40fbf2eb6997745d8653ed34c528cb436249e2901"} Feb 23 13:25:12 crc kubenswrapper[4851]: I0223 13:25:12.747998 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 13:25:12 crc kubenswrapper[4851]: I0223 13:25:12.751110 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 13:25:12 crc kubenswrapper[4851]: I0223 13:25:12.752629 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-7x6zn" Feb 23 13:25:12 crc kubenswrapper[4851]: I0223 13:25:12.757265 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 13:25:12 crc kubenswrapper[4851]: I0223 13:25:12.869053 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x8ww\" (UniqueName: \"kubernetes.io/projected/aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3-kube-api-access-5x8ww\") pod \"kube-state-metrics-0\" (UID: \"aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3\") " pod="openstack/kube-state-metrics-0" Feb 23 13:25:12 crc kubenswrapper[4851]: I0223 13:25:12.970525 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x8ww\" (UniqueName: \"kubernetes.io/projected/aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3-kube-api-access-5x8ww\") pod \"kube-state-metrics-0\" (UID: \"aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3\") " pod="openstack/kube-state-metrics-0" Feb 23 13:25:12 crc kubenswrapper[4851]: I0223 13:25:12.989494 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x8ww\" 
(UniqueName: \"kubernetes.io/projected/aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3-kube-api-access-5x8ww\") pod \"kube-state-metrics-0\" (UID: \"aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3\") " pod="openstack/kube-state-metrics-0" Feb 23 13:25:13 crc kubenswrapper[4851]: I0223 13:25:13.071912 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.822101 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2rf22"] Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.823678 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2rf22" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.828991 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.829242 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.829245 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-gs8mk" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.831245 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2rf22"] Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.838262 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-42p6n"] Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.842010 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.873259 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-42p6n"] Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920128 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvsqs\" (UniqueName: \"kubernetes.io/projected/f366da8b-d0d3-411e-afec-53af288b0c42-kube-api-access-wvsqs\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920171 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f366da8b-d0d3-411e-afec-53af288b0c42-combined-ca-bundle\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920195 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-var-log\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920215 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f366da8b-d0d3-411e-afec-53af288b0c42-var-run-ovn\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920241 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-var-lib\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920260 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f366da8b-d0d3-411e-afec-53af288b0c42-scripts\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920281 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f366da8b-d0d3-411e-afec-53af288b0c42-var-run\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920301 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-var-run\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920412 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f366da8b-d0d3-411e-afec-53af288b0c42-ovn-controller-tls-certs\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920441 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-etc-ovs\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920469 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkbz6\" (UniqueName: \"kubernetes.io/projected/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-kube-api-access-jkbz6\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920495 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f366da8b-d0d3-411e-afec-53af288b0c42-var-log-ovn\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:15 crc kubenswrapper[4851]: I0223 13:25:15.920530 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-scripts\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023089 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-scripts\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023157 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvsqs\" (UniqueName: 
\"kubernetes.io/projected/f366da8b-d0d3-411e-afec-53af288b0c42-kube-api-access-wvsqs\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023182 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f366da8b-d0d3-411e-afec-53af288b0c42-combined-ca-bundle\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023205 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-var-log\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023224 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f366da8b-d0d3-411e-afec-53af288b0c42-var-run-ovn\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023260 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f366da8b-d0d3-411e-afec-53af288b0c42-scripts\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023276 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-var-lib\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") 
" pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023304 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f366da8b-d0d3-411e-afec-53af288b0c42-var-run\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023340 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-var-run\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023378 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f366da8b-d0d3-411e-afec-53af288b0c42-ovn-controller-tls-certs\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023410 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-etc-ovs\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023429 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbz6\" (UniqueName: \"kubernetes.io/projected/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-kube-api-access-jkbz6\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023450 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f366da8b-d0d3-411e-afec-53af288b0c42-var-log-ovn\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.023966 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-var-lib\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.024046 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f366da8b-d0d3-411e-afec-53af288b0c42-var-log-ovn\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.025032 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-etc-ovs\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.025445 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f366da8b-d0d3-411e-afec-53af288b0c42-var-run\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.025523 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-var-log\") pod \"ovn-controller-ovs-42p6n\" 
(UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.026162 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-scripts\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.026275 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f366da8b-d0d3-411e-afec-53af288b0c42-var-run-ovn\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.027851 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-var-run\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.028931 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f366da8b-d0d3-411e-afec-53af288b0c42-scripts\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.033896 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f366da8b-d0d3-411e-afec-53af288b0c42-ovn-controller-tls-certs\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.048703 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wvsqs\" (UniqueName: \"kubernetes.io/projected/f366da8b-d0d3-411e-afec-53af288b0c42-kube-api-access-wvsqs\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.048828 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkbz6\" (UniqueName: \"kubernetes.io/projected/d88acd5e-87c7-4b36-9aad-d20d44b7d0bf-kube-api-access-jkbz6\") pod \"ovn-controller-ovs-42p6n\" (UID: \"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf\") " pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.048956 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f366da8b-d0d3-411e-afec-53af288b0c42-combined-ca-bundle\") pod \"ovn-controller-2rf22\" (UID: \"f366da8b-d0d3-411e-afec-53af288b0c42\") " pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.140645 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2rf22" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.174424 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.710649 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.712959 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.716677 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.718570 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.718710 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.718825 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.719310 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8vv7m" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.727963 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.732263 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6c7eb0b-bab9-47af-b0e9-fd539479e252-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.732315 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzvm\" (UniqueName: \"kubernetes.io/projected/d6c7eb0b-bab9-47af-b0e9-fd539479e252-kube-api-access-lvzvm\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.732402 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.732419 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6c7eb0b-bab9-47af-b0e9-fd539479e252-config\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.732435 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d6c7eb0b-bab9-47af-b0e9-fd539479e252-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.732454 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c7eb0b-bab9-47af-b0e9-fd539479e252-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.732473 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c7eb0b-bab9-47af-b0e9-fd539479e252-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.732490 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d6c7eb0b-bab9-47af-b0e9-fd539479e252-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.835535 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvzvm\" (UniqueName: \"kubernetes.io/projected/d6c7eb0b-bab9-47af-b0e9-fd539479e252-kube-api-access-lvzvm\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.835613 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.835635 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6c7eb0b-bab9-47af-b0e9-fd539479e252-config\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.835654 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d6c7eb0b-bab9-47af-b0e9-fd539479e252-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.835698 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c7eb0b-bab9-47af-b0e9-fd539479e252-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" 
Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.835718 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c7eb0b-bab9-47af-b0e9-fd539479e252-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.835737 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6c7eb0b-bab9-47af-b0e9-fd539479e252-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.835842 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6c7eb0b-bab9-47af-b0e9-fd539479e252-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.837445 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6c7eb0b-bab9-47af-b0e9-fd539479e252-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.837489 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d6c7eb0b-bab9-47af-b0e9-fd539479e252-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.838767 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.840564 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c7eb0b-bab9-47af-b0e9-fd539479e252-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.841163 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6c7eb0b-bab9-47af-b0e9-fd539479e252-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.841434 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6c7eb0b-bab9-47af-b0e9-fd539479e252-config\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.841905 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c7eb0b-bab9-47af-b0e9-fd539479e252-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.852413 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvzvm\" (UniqueName: \"kubernetes.io/projected/d6c7eb0b-bab9-47af-b0e9-fd539479e252-kube-api-access-lvzvm\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " 
pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:16 crc kubenswrapper[4851]: I0223 13:25:16.865209 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"d6c7eb0b-bab9-47af-b0e9-fd539479e252\") " pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:17 crc kubenswrapper[4851]: I0223 13:25:17.042522 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:18 crc kubenswrapper[4851]: E0223 13:25:18.501555 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 23 13:25:18 crc kubenswrapper[4851]: E0223 13:25:18.501945 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6gg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-t8hl9_openstack(2a5341db-3c12-4a17-8f03-78c8dad9379d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:25:18 crc kubenswrapper[4851]: E0223 13:25:18.503152 4851 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" podUID="2a5341db-3c12-4a17-8f03-78c8dad9379d" Feb 23 13:25:18 crc kubenswrapper[4851]: E0223 13:25:18.509616 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 23 13:25:18 crc kubenswrapper[4851]: E0223 13:25:18.509769 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4fwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-652f5_openstack(86e19e7e-3c5b-42b4-a27c-070eb1e3d68b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:25:18 crc kubenswrapper[4851]: E0223 13:25:18.510944 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" podUID="86e19e7e-3c5b-42b4-a27c-070eb1e3d68b" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.160616 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 13:25:19 crc kubenswrapper[4851]: W0223 13:25:19.171202 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa510cc6_f2ed_4aa7_81f4_61b2f5b4f2c3.slice/crio-ec1a5785963f6ac8a8c5b6d828e9e4a7a4248257dbd748e6dec03bd76ebbf4e4 WatchSource:0}: Error finding container ec1a5785963f6ac8a8c5b6d828e9e4a7a4248257dbd748e6dec03bd76ebbf4e4: Status 404 returned error can't find the container with id ec1a5785963f6ac8a8c5b6d828e9e4a7a4248257dbd748e6dec03bd76ebbf4e4 Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.174295 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.186613 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:25:19 crc kubenswrapper[4851]: W0223 13:25:19.191393 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb3d0e2c_9427_4585_8f01_0e1640feca9a.slice/crio-b8e59d5893f650e05b0a2fe7d671e70824f48710887aeca2135b291514c4cefd WatchSource:0}: Error finding container b8e59d5893f650e05b0a2fe7d671e70824f48710887aeca2135b291514c4cefd: Status 404 returned error can't find the container with id b8e59d5893f650e05b0a2fe7d671e70824f48710887aeca2135b291514c4cefd Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.200886 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.234205 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 
13:25:19.296760 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.369778 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2rf22"] Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.381149 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 23 13:25:19 crc kubenswrapper[4851]: W0223 13:25:19.389586 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0d61403_fda9_4081_8c39_32ff86cc879c.slice/crio-238ed87d3c1a088213d2a4320ac62506a4179ba9795bc54fe2b338c8624a64d5 WatchSource:0}: Error finding container 238ed87d3c1a088213d2a4320ac62506a4179ba9795bc54fe2b338c8624a64d5: Status 404 returned error can't find the container with id 238ed87d3c1a088213d2a4320ac62506a4179ba9795bc54fe2b338c8624a64d5 Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.469829 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-42p6n"] Feb 23 13:25:19 crc kubenswrapper[4851]: W0223 13:25:19.472171 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd88acd5e_87c7_4b36_9aad_d20d44b7d0bf.slice/crio-310105b435d296fc9b8d67229d83a7ceffbbf979bb472016fcfa5fd014e6a94d WatchSource:0}: Error finding container 310105b435d296fc9b8d67229d83a7ceffbbf979bb472016fcfa5fd014e6a94d: Status 404 returned error can't find the container with id 310105b435d296fc9b8d67229d83a7ceffbbf979bb472016fcfa5fd014e6a94d Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.476209 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d6c7eb0b-bab9-47af-b0e9-fd539479e252","Type":"ContainerStarted","Data":"2a514573d9f34dc47221dc4a027084b5237e5f0c0ba6bdf06e05e2c709c9016a"} Feb 23 13:25:19 crc 
kubenswrapper[4851]: I0223 13:25:19.477342 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2rf22" event={"ID":"f366da8b-d0d3-411e-afec-53af288b0c42","Type":"ContainerStarted","Data":"f17d9cb8300420cc4f8d8594bb5436082808cae5a9a89ae3adbdadfc99f7e905"} Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.478719 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"46bf34c9-f0ec-4de6-ae40-fd334c23af27","Type":"ContainerStarted","Data":"0f79a0b45ef1390a938590bdc7a064da53f3e0d62d797a3c47035f43c10b4335"} Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.481649 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cb3d0e2c-9427-4585-8f01-0e1640feca9a","Type":"ContainerStarted","Data":"b8e59d5893f650e05b0a2fe7d671e70824f48710887aeca2135b291514c4cefd"} Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.483655 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3","Type":"ContainerStarted","Data":"ec1a5785963f6ac8a8c5b6d828e9e4a7a4248257dbd748e6dec03bd76ebbf4e4"} Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.495590 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec010635-96e5-448a-98c1-e458fd6f31ed","Type":"ContainerStarted","Data":"0cd77ddffdba57d2b601e67fc45cba449da7e88c993a8894579f73b50b00bfc3"} Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.512771 4851 generic.go:334] "Generic (PLEG): container finished" podID="a1a68000-b534-4591-90e2-a44e650c15e1" containerID="befb10a202c6366c5372ef82a26dc652f44d73e163e5e75e039f4f494ecfea2b" exitCode=0 Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.513495 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" 
event={"ID":"a1a68000-b534-4591-90e2-a44e650c15e1","Type":"ContainerDied","Data":"befb10a202c6366c5372ef82a26dc652f44d73e163e5e75e039f4f494ecfea2b"} Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.513538 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-bnnms"] Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.515930 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.519278 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"aee353b8-8a37-4055-a016-2c1aac2cf20b","Type":"ContainerStarted","Data":"0f382b2d828ebd22e13ea6f6449438cbac11d40579f3b59974ad43958f106632"} Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.520583 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a0d61403-fda9-4081-8c39-32ff86cc879c","Type":"ContainerStarted","Data":"238ed87d3c1a088213d2a4320ac62506a4179ba9795bc54fe2b338c8624a64d5"} Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.522397 4851 generic.go:334] "Generic (PLEG): container finished" podID="9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" containerID="051df9e826a033ef678a5a9aec0ae5ce9f850ea61ba8112e165d5f7ee3a03a28" exitCode=0 Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.522554 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" event={"ID":"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5","Type":"ContainerDied","Data":"051df9e826a033ef678a5a9aec0ae5ce9f850ea61ba8112e165d5f7ee3a03a28"} Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.527097 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.530195 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-metrics-bnnms"] Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.591366 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-combined-ca-bundle\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.591411 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.591523 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-config\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.591553 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snhr6\" (UniqueName: \"kubernetes.io/projected/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-kube-api-access-snhr6\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.591576 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-ovn-rundir\") pod 
\"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.591649 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-ovs-rundir\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.692618 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-config\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.692691 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snhr6\" (UniqueName: \"kubernetes.io/projected/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-kube-api-access-snhr6\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.692726 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-ovn-rundir\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.692786 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-ovs-rundir\") pod \"ovn-controller-metrics-bnnms\" (UID: 
\"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.692837 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-combined-ca-bundle\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.692862 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.693180 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-ovs-rundir\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.693513 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-config\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.695033 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-ovn-rundir\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" 
Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.699289 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.699561 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-combined-ca-bundle\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.714146 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snhr6\" (UniqueName: \"kubernetes.io/projected/a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d-kube-api-access-snhr6\") pod \"ovn-controller-metrics-bnnms\" (UID: \"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d\") " pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.740461 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rhthr"] Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.782439 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5lfjs"] Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.784028 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.796929 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-config\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.796998 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.797186 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.797254 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5qwn\" (UniqueName: \"kubernetes.io/projected/6f911749-26ea-46ff-b63f-105bcd92c2b8-kube-api-access-z5qwn\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.803680 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.836412 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-7fd796d7df-5lfjs"] Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.898922 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5qwn\" (UniqueName: \"kubernetes.io/projected/6f911749-26ea-46ff-b63f-105bcd92c2b8-kube-api-access-z5qwn\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.899283 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-config\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.899312 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.899462 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.900300 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: 
I0223 13:25:19.900459 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.900775 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-config\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.920872 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5qwn\" (UniqueName: \"kubernetes.io/projected/6f911749-26ea-46ff-b63f-105bcd92c2b8-kube-api-access-z5qwn\") pod \"dnsmasq-dns-7fd796d7df-5lfjs\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:19 crc kubenswrapper[4851]: I0223 13:25:19.937714 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-bnnms" Feb 23 13:25:20 crc kubenswrapper[4851]: E0223 13:25:20.004927 4851 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 23 13:25:20 crc kubenswrapper[4851]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/a1a68000-b534-4591-90e2-a44e650c15e1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 23 13:25:20 crc kubenswrapper[4851]: > podSandboxID="611ed65f240d5b29511ca894ded4fe57b8ee79e92421a1dd1ba4f1d226863c3c" Feb 23 13:25:20 crc kubenswrapper[4851]: E0223 13:25:20.005095 4851 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 13:25:20 crc kubenswrapper[4851]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwxt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-9xb8s_openstack(a1a68000-b534-4591-90e2-a44e650c15e1): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/a1a68000-b534-4591-90e2-a44e650c15e1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 23 13:25:20 crc kubenswrapper[4851]: > logger="UnhandledError" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.006089 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:20 crc kubenswrapper[4851]: E0223 13:25:20.006477 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/a1a68000-b534-4591-90e2-a44e650c15e1/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" podUID="a1a68000-b534-4591-90e2-a44e650c15e1" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.035303 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.127258 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.203624 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4fwp\" (UniqueName: \"kubernetes.io/projected/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-kube-api-access-q4fwp\") pod \"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b\" (UID: \"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b\") " Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.203674 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-config\") pod \"2a5341db-3c12-4a17-8f03-78c8dad9379d\" (UID: \"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.203758 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-dns-svc\") pod \"2a5341db-3c12-4a17-8f03-78c8dad9379d\" (UID: \"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.203792 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-config\") pod \"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b\" (UID: \"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b\") " Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.203846 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6gg2\" (UniqueName: \"kubernetes.io/projected/2a5341db-3c12-4a17-8f03-78c8dad9379d-kube-api-access-b6gg2\") pod \"2a5341db-3c12-4a17-8f03-78c8dad9379d\" (UID: 
\"2a5341db-3c12-4a17-8f03-78c8dad9379d\") " Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.204342 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a5341db-3c12-4a17-8f03-78c8dad9379d" (UID: "2a5341db-3c12-4a17-8f03-78c8dad9379d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.204391 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-config" (OuterVolumeSpecName: "config") pod "86e19e7e-3c5b-42b4-a27c-070eb1e3d68b" (UID: "86e19e7e-3c5b-42b4-a27c-070eb1e3d68b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.204391 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-config" (OuterVolumeSpecName: "config") pod "2a5341db-3c12-4a17-8f03-78c8dad9379d" (UID: "2a5341db-3c12-4a17-8f03-78c8dad9379d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.208479 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5341db-3c12-4a17-8f03-78c8dad9379d-kube-api-access-b6gg2" (OuterVolumeSpecName: "kube-api-access-b6gg2") pod "2a5341db-3c12-4a17-8f03-78c8dad9379d" (UID: "2a5341db-3c12-4a17-8f03-78c8dad9379d"). InnerVolumeSpecName "kube-api-access-b6gg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.208912 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-kube-api-access-q4fwp" (OuterVolumeSpecName: "kube-api-access-q4fwp") pod "86e19e7e-3c5b-42b4-a27c-070eb1e3d68b" (UID: "86e19e7e-3c5b-42b4-a27c-070eb1e3d68b"). InnerVolumeSpecName "kube-api-access-q4fwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.305520 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.305557 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4fwp\" (UniqueName: \"kubernetes.io/projected/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-kube-api-access-q4fwp\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.305573 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a5341db-3c12-4a17-8f03-78c8dad9379d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.305601 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.305612 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6gg2\" (UniqueName: \"kubernetes.io/projected/2a5341db-3c12-4a17-8f03-78c8dad9379d-kube-api-access-b6gg2\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.422743 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 
13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.423999 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.426944 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.427220 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.427416 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.428014 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-fhrx8" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.441321 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.535393 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-42p6n" event={"ID":"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf","Type":"ContainerStarted","Data":"310105b435d296fc9b8d67229d83a7ceffbbf979bb472016fcfa5fd014e6a94d"} Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.537868 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" event={"ID":"86e19e7e-3c5b-42b4-a27c-070eb1e3d68b","Type":"ContainerDied","Data":"2cb3a2bd17af1aa78ceaf1aece8e25fc3d811db8a45dc42f223a801fae302b52"} Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.537957 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-652f5" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.541963 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" event={"ID":"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5","Type":"ContainerStarted","Data":"84f55d2df3f3afe221fc3e7b9dedad73bd011b0b44b207008c635c4effdfcd04"} Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.542038 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" podUID="9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" containerName="dnsmasq-dns" containerID="cri-o://84f55d2df3f3afe221fc3e7b9dedad73bd011b0b44b207008c635c4effdfcd04" gracePeriod=10 Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.542094 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.544061 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" event={"ID":"2a5341db-3c12-4a17-8f03-78c8dad9379d","Type":"ContainerDied","Data":"174d8853df394850427c78c77b8696fb11dcbdec57a02404baace78002e86925"} Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.544084 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-t8hl9" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.587853 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" podStartSLOduration=6.805290208 podStartE2EDuration="14.587838208s" podCreationTimestamp="2026-02-23 13:25:06 +0000 UTC" firstStartedPulling="2026-02-23 13:25:10.832512847 +0000 UTC m=+1065.514216525" lastFinishedPulling="2026-02-23 13:25:18.615060847 +0000 UTC m=+1073.296764525" observedRunningTime="2026-02-23 13:25:20.582026043 +0000 UTC m=+1075.263729721" watchObservedRunningTime="2026-02-23 13:25:20.587838208 +0000 UTC m=+1075.269541886" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.615346 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f10652-af07-4024-b8b6-91d8e8974144-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.615696 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68f10652-af07-4024-b8b6-91d8e8974144-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.615720 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f10652-af07-4024-b8b6-91d8e8974144-config\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.615760 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68f10652-af07-4024-b8b6-91d8e8974144-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.615779 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.615906 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68f10652-af07-4024-b8b6-91d8e8974144-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.615956 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx2jd\" (UniqueName: \"kubernetes.io/projected/68f10652-af07-4024-b8b6-91d8e8974144-kube-api-access-rx2jd\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.615987 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68f10652-af07-4024-b8b6-91d8e8974144-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.625438 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-652f5"] Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.634002 4851 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-652f5"] Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.685778 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-t8hl9"] Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.698417 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-t8hl9"] Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.717824 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f10652-af07-4024-b8b6-91d8e8974144-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.717892 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68f10652-af07-4024-b8b6-91d8e8974144-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.717920 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f10652-af07-4024-b8b6-91d8e8974144-config\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.717976 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68f10652-af07-4024-b8b6-91d8e8974144-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.717993 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.718075 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68f10652-af07-4024-b8b6-91d8e8974144-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.718100 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx2jd\" (UniqueName: \"kubernetes.io/projected/68f10652-af07-4024-b8b6-91d8e8974144-kube-api-access-rx2jd\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.718117 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68f10652-af07-4024-b8b6-91d8e8974144-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.718436 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.719866 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/68f10652-af07-4024-b8b6-91d8e8974144-scripts\") pod \"ovsdbserver-sb-0\" 
(UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.719953 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68f10652-af07-4024-b8b6-91d8e8974144-config\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.721647 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/68f10652-af07-4024-b8b6-91d8e8974144-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.735201 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f10652-af07-4024-b8b6-91d8e8974144-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.739983 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/68f10652-af07-4024-b8b6-91d8e8974144-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.741461 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.742285 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rx2jd\" (UniqueName: \"kubernetes.io/projected/68f10652-af07-4024-b8b6-91d8e8974144-kube-api-access-rx2jd\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:20 crc kubenswrapper[4851]: I0223 13:25:20.755640 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/68f10652-af07-4024-b8b6-91d8e8974144-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"68f10652-af07-4024-b8b6-91d8e8974144\") " pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:21 crc kubenswrapper[4851]: I0223 13:25:21.048172 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:21 crc kubenswrapper[4851]: I0223 13:25:21.556549 4851 generic.go:334] "Generic (PLEG): container finished" podID="9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" containerID="84f55d2df3f3afe221fc3e7b9dedad73bd011b0b44b207008c635c4effdfcd04" exitCode=0 Feb 23 13:25:21 crc kubenswrapper[4851]: I0223 13:25:21.556588 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" event={"ID":"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5","Type":"ContainerDied","Data":"84f55d2df3f3afe221fc3e7b9dedad73bd011b0b44b207008c635c4effdfcd04"} Feb 23 13:25:21 crc kubenswrapper[4851]: I0223 13:25:21.979473 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5341db-3c12-4a17-8f03-78c8dad9379d" path="/var/lib/kubelet/pods/2a5341db-3c12-4a17-8f03-78c8dad9379d/volumes" Feb 23 13:25:21 crc kubenswrapper[4851]: I0223 13:25:21.980462 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e19e7e-3c5b-42b4-a27c-070eb1e3d68b" path="/var/lib/kubelet/pods/86e19e7e-3c5b-42b4-a27c-070eb1e3d68b/volumes" Feb 23 13:25:22 crc kubenswrapper[4851]: I0223 13:25:22.202944 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-metrics-bnnms"] Feb 23 13:25:24 crc kubenswrapper[4851]: I0223 13:25:24.581963 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bnnms" event={"ID":"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d","Type":"ContainerStarted","Data":"1813df29580a6ff1c7297c5d59fd5c56fe031cf919ec217ce3c77b49b2d4cfca"} Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.460853 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.585477 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-config\") pod \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.585536 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnlqt\" (UniqueName: \"kubernetes.io/projected/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-kube-api-access-xnlqt\") pod \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.585658 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-dns-svc\") pod \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\" (UID: \"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5\") " Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.589950 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-kube-api-access-xnlqt" (OuterVolumeSpecName: "kube-api-access-xnlqt") pod "9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" (UID: "9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5"). 
InnerVolumeSpecName "kube-api-access-xnlqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.617678 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" (UID: "9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.621654 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-config" (OuterVolumeSpecName: "config") pod "9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" (UID: "9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.642212 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" event={"ID":"9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5","Type":"ContainerDied","Data":"47f822033880a4e08faa83a40fbf2eb6997745d8653ed34c528cb436249e2901"} Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.642256 4851 scope.go:117] "RemoveContainer" containerID="84f55d2df3f3afe221fc3e7b9dedad73bd011b0b44b207008c635c4effdfcd04" Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.642374 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.672865 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rhthr"] Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.678125 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-rhthr"] Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.687611 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.687648 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnlqt\" (UniqueName: \"kubernetes.io/projected/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-kube-api-access-xnlqt\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.687660 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:31 crc kubenswrapper[4851]: E0223 13:25:31.785665 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 23 13:25:31 crc kubenswrapper[4851]: E0223 13:25:31.785821 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndbh667h9bh9fh57bh557h64fh6h6bh647h5bh9fh648h549hc8hd5h7dh577h667hb8h55fh545h546h666h59fh565h684h5f9hb6h558h5d8h9bq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wvsqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:ni
l,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-2rf22_openstack(f366da8b-d0d3-411e-afec-53af288b0c42): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:25:31 crc kubenswrapper[4851]: E0223 13:25:31.787074 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-2rf22" podUID="f366da8b-d0d3-411e-afec-53af288b0c42" Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.861160 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-rhthr" podUID="9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: i/o timeout" Feb 23 13:25:31 crc kubenswrapper[4851]: I0223 13:25:31.984833 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" path="/var/lib/kubelet/pods/9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5/volumes" Feb 23 13:25:32 crc kubenswrapper[4851]: I0223 13:25:32.344240 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5lfjs"] Feb 23 13:25:32 crc kubenswrapper[4851]: I0223 13:25:32.591511 4851 scope.go:117] "RemoveContainer" containerID="051df9e826a033ef678a5a9aec0ae5ce9f850ea61ba8112e165d5f7ee3a03a28" Feb 23 13:25:32 crc kubenswrapper[4851]: W0223 13:25:32.595440 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f911749_26ea_46ff_b63f_105bcd92c2b8.slice/crio-6b5eff6f90b4ab459f4b72929ae95de9ba09db4e5ef4bf56b438955b417a882c WatchSource:0}: Error finding container 6b5eff6f90b4ab459f4b72929ae95de9ba09db4e5ef4bf56b438955b417a882c: Status 404 returned error can't find the container with id 6b5eff6f90b4ab459f4b72929ae95de9ba09db4e5ef4bf56b438955b417a882c Feb 23 13:25:32 crc kubenswrapper[4851]: I0223 13:25:32.682550 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" event={"ID":"6f911749-26ea-46ff-b63f-105bcd92c2b8","Type":"ContainerStarted","Data":"6b5eff6f90b4ab459f4b72929ae95de9ba09db4e5ef4bf56b438955b417a882c"} Feb 23 13:25:32 crc kubenswrapper[4851]: E0223 13:25:32.687357 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-2rf22" podUID="f366da8b-d0d3-411e-afec-53af288b0c42" Feb 23 13:25:33 crc kubenswrapper[4851]: I0223 
13:25:33.311748 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 13:25:33 crc kubenswrapper[4851]: E0223 13:25:33.429926 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 23 13:25:33 crc kubenswrapper[4851]: E0223 13:25:33.429968 4851 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 23 13:25:33 crc kubenswrapper[4851]: E0223 13:25:33.430305 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5x8ww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 13:25:33 crc kubenswrapper[4851]: E0223 13:25:33.431610 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3" Feb 23 13:25:33 crc kubenswrapper[4851]: I0223 13:25:33.440729 4851 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:25:33 crc kubenswrapper[4851]: I0223 13:25:33.691423 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"68f10652-af07-4024-b8b6-91d8e8974144","Type":"ContainerStarted","Data":"ea7cfd9f27026346242a77e99dbc71f3a6296aa6539229606cacba334d7ceed6"} Feb 23 13:25:33 crc kubenswrapper[4851]: E0223 
13:25:33.694492 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3" Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.701934 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a0d61403-fda9-4081-8c39-32ff86cc879c","Type":"ContainerStarted","Data":"24f3798c6dcf6845423a63c25d76e5bc2d2840766534204761e99c6d1f06512d"} Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.704953 4851 generic.go:334] "Generic (PLEG): container finished" podID="6f911749-26ea-46ff-b63f-105bcd92c2b8" containerID="ae52dfb893714d220829e795fa56e055ff1e5fc0c67933d0c69ae12738a94fa2" exitCode=0 Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.705079 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" event={"ID":"6f911749-26ea-46ff-b63f-105bcd92c2b8","Type":"ContainerDied","Data":"ae52dfb893714d220829e795fa56e055ff1e5fc0c67933d0c69ae12738a94fa2"} Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.708637 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-42p6n" event={"ID":"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf","Type":"ContainerStarted","Data":"fe8a25be842b780588239bd6e975d59502326f56ebb7ffe8aed11eb8376ad5eb"} Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.715263 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cb3d0e2c-9427-4585-8f01-0e1640feca9a","Type":"ContainerStarted","Data":"6d27a5631bf6c15a02c3ba4fa21de2506ee5088ecbff5b4535c587d23d0c0155"} Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.717190 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"d6c7eb0b-bab9-47af-b0e9-fd539479e252","Type":"ContainerStarted","Data":"88c3b4dbc2b0a1d0f8b7080b0422ae2d86029967bc6bbac457e648d9cb8c1c73"} Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.721069 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec010635-96e5-448a-98c1-e458fd6f31ed","Type":"ContainerStarted","Data":"e273fe812a61abead0849f007f3f26e978b68df2e0939cf7a163e3001984bc7a"} Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.730902 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" event={"ID":"a1a68000-b534-4591-90e2-a44e650c15e1","Type":"ContainerStarted","Data":"ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d"} Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.731711 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.742421 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"aee353b8-8a37-4055-a016-2c1aac2cf20b","Type":"ContainerStarted","Data":"285ededcf1072b0a16f032aa3ae8614565208fe20e660e70f8dde5760b9e5c3e"} Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.743229 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.749086 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bnnms" event={"ID":"a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d","Type":"ContainerStarted","Data":"800bde5cde2433643f23149b251073ce1b7b62c38128516873b860fa98d5c457"} Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.870379 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" podStartSLOduration=17.321195353 podStartE2EDuration="28.87036435s" 
podCreationTimestamp="2026-02-23 13:25:06 +0000 UTC" firstStartedPulling="2026-02-23 13:25:07.059681094 +0000 UTC m=+1061.741384772" lastFinishedPulling="2026-02-23 13:25:18.608850091 +0000 UTC m=+1073.290553769" observedRunningTime="2026-02-23 13:25:34.863435374 +0000 UTC m=+1089.545139062" watchObservedRunningTime="2026-02-23 13:25:34.87036435 +0000 UTC m=+1089.552068028" Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.887268 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.055749849 podStartE2EDuration="24.887251278s" podCreationTimestamp="2026-02-23 13:25:10 +0000 UTC" firstStartedPulling="2026-02-23 13:25:19.203092398 +0000 UTC m=+1073.884796076" lastFinishedPulling="2026-02-23 13:25:32.034593827 +0000 UTC m=+1086.716297505" observedRunningTime="2026-02-23 13:25:34.886107436 +0000 UTC m=+1089.567811124" watchObservedRunningTime="2026-02-23 13:25:34.887251278 +0000 UTC m=+1089.568954956" Feb 23 13:25:34 crc kubenswrapper[4851]: I0223 13:25:34.918723 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-bnnms" podStartSLOduration=7.196052181 podStartE2EDuration="15.918674397s" podCreationTimestamp="2026-02-23 13:25:19 +0000 UTC" firstStartedPulling="2026-02-23 13:25:24.326002289 +0000 UTC m=+1079.007705967" lastFinishedPulling="2026-02-23 13:25:33.048624505 +0000 UTC m=+1087.730328183" observedRunningTime="2026-02-23 13:25:34.910267409 +0000 UTC m=+1089.591971097" watchObservedRunningTime="2026-02-23 13:25:34.918674397 +0000 UTC m=+1089.600378065" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.337823 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9xb8s"] Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.364293 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-kkz4f"] Feb 23 13:25:35 crc kubenswrapper[4851]: E0223 
13:25:35.364619 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" containerName="dnsmasq-dns" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.364642 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" containerName="dnsmasq-dns" Feb 23 13:25:35 crc kubenswrapper[4851]: E0223 13:25:35.364660 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" containerName="init" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.364668 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" containerName="init" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.364866 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3bbcbe-982d-4821-bdf1-cfdbc84b42d5" containerName="dnsmasq-dns" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.365859 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.367704 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.381755 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-kkz4f"] Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.457514 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.457569 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.457664 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-config\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.457727 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzcl2\" (UniqueName: \"kubernetes.io/projected/ef1cf674-79a9-49b6-b482-8a04787e511e-kube-api-access-zzcl2\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.457777 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.559105 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzcl2\" (UniqueName: \"kubernetes.io/projected/ef1cf674-79a9-49b6-b482-8a04787e511e-kube-api-access-zzcl2\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.559177 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.559223 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.559245 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 
23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.559316 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-config\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.560157 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.560237 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.560312 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-config\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.560472 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.585511 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zzcl2\" (UniqueName: \"kubernetes.io/projected/ef1cf674-79a9-49b6-b482-8a04787e511e-kube-api-access-zzcl2\") pod \"dnsmasq-dns-86db49b7ff-kkz4f\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.680991 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.760629 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"68f10652-af07-4024-b8b6-91d8e8974144","Type":"ContainerStarted","Data":"8212cc7c30a2af2e07e23f8a6c0a6149f6d5c597de2b87c553748a062a2b8034"} Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.760671 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"68f10652-af07-4024-b8b6-91d8e8974144","Type":"ContainerStarted","Data":"2f78d0d1ef1d95b20f37579c7142d0598b84b79c99807b0940071ff86b67f2c6"} Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.766264 4851 generic.go:334] "Generic (PLEG): container finished" podID="d88acd5e-87c7-4b36-9aad-d20d44b7d0bf" containerID="fe8a25be842b780588239bd6e975d59502326f56ebb7ffe8aed11eb8376ad5eb" exitCode=0 Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.766351 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-42p6n" event={"ID":"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf","Type":"ContainerDied","Data":"fe8a25be842b780588239bd6e975d59502326f56ebb7ffe8aed11eb8376ad5eb"} Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.777516 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" event={"ID":"6f911749-26ea-46ff-b63f-105bcd92c2b8","Type":"ContainerStarted","Data":"35160ece8042e45142d86cd3a357f85302d0b42389996e7785a69779f925100b"} Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.777665 4851 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.779356 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"d6c7eb0b-bab9-47af-b0e9-fd539479e252","Type":"ContainerStarted","Data":"436ab2204527b3d6e832f05abe5f71376a0c420ab9939ec2b9e9e16c7e36ce53"} Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.783021 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"46bf34c9-f0ec-4de6-ae40-fd334c23af27","Type":"ContainerStarted","Data":"de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b"} Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.793547 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=15.697609181 podStartE2EDuration="16.793531616s" podCreationTimestamp="2026-02-23 13:25:19 +0000 UTC" firstStartedPulling="2026-02-23 13:25:33.440454893 +0000 UTC m=+1088.122158571" lastFinishedPulling="2026-02-23 13:25:34.536377318 +0000 UTC m=+1089.218081006" observedRunningTime="2026-02-23 13:25:35.784380697 +0000 UTC m=+1090.466084395" watchObservedRunningTime="2026-02-23 13:25:35.793531616 +0000 UTC m=+1090.475235294" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.835567 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" podStartSLOduration=16.835547255 podStartE2EDuration="16.835547255s" podCreationTimestamp="2026-02-23 13:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:25:35.821841967 +0000 UTC m=+1090.503545655" watchObservedRunningTime="2026-02-23 13:25:35.835547255 +0000 UTC m=+1090.517250933" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.851186 4851 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.117744225 podStartE2EDuration="20.851169768s" podCreationTimestamp="2026-02-23 13:25:15 +0000 UTC" firstStartedPulling="2026-02-23 13:25:19.301144503 +0000 UTC m=+1073.982848181" lastFinishedPulling="2026-02-23 13:25:32.034570046 +0000 UTC m=+1086.716273724" observedRunningTime="2026-02-23 13:25:35.845900978 +0000 UTC m=+1090.527604656" watchObservedRunningTime="2026-02-23 13:25:35.851169768 +0000 UTC m=+1090.532873446" Feb 23 13:25:35 crc kubenswrapper[4851]: I0223 13:25:35.936730 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-kkz4f"] Feb 23 13:25:35 crc kubenswrapper[4851]: W0223 13:25:35.942236 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef1cf674_79a9_49b6_b482_8a04787e511e.slice/crio-3cd7cb3243fcfc77d592160cd0b8dc2ec9c8de61a1a3f82a783b437d334b8628 WatchSource:0}: Error finding container 3cd7cb3243fcfc77d592160cd0b8dc2ec9c8de61a1a3f82a783b437d334b8628: Status 404 returned error can't find the container with id 3cd7cb3243fcfc77d592160cd0b8dc2ec9c8de61a1a3f82a783b437d334b8628 Feb 23 13:25:36 crc kubenswrapper[4851]: I0223 13:25:36.049322 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:36 crc kubenswrapper[4851]: I0223 13:25:36.049382 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:36 crc kubenswrapper[4851]: I0223 13:25:36.790558 4851 generic.go:334] "Generic (PLEG): container finished" podID="ef1cf674-79a9-49b6-b482-8a04787e511e" containerID="f1cf4a1cd4440f6cafdf79dcf8e68c3a340d659fd6e5040a8645f6be6a10fb3b" exitCode=0 Feb 23 13:25:36 crc kubenswrapper[4851]: I0223 13:25:36.790852 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" event={"ID":"ef1cf674-79a9-49b6-b482-8a04787e511e","Type":"ContainerDied","Data":"f1cf4a1cd4440f6cafdf79dcf8e68c3a340d659fd6e5040a8645f6be6a10fb3b"} Feb 23 13:25:36 crc kubenswrapper[4851]: I0223 13:25:36.790877 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" event={"ID":"ef1cf674-79a9-49b6-b482-8a04787e511e","Type":"ContainerStarted","Data":"3cd7cb3243fcfc77d592160cd0b8dc2ec9c8de61a1a3f82a783b437d334b8628"} Feb 23 13:25:36 crc kubenswrapper[4851]: I0223 13:25:36.947250 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" podUID="a1a68000-b534-4591-90e2-a44e650c15e1" containerName="dnsmasq-dns" containerID="cri-o://ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d" gracePeriod=10 Feb 23 13:25:36 crc kubenswrapper[4851]: I0223 13:25:36.948827 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-42p6n" event={"ID":"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf","Type":"ContainerStarted","Data":"827ae268f01b387da55dae41bdcf1cc8f84207ef612f3e948c14334c9f2c4eba"} Feb 23 13:25:36 crc kubenswrapper[4851]: I0223 13:25:36.948865 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-42p6n" event={"ID":"d88acd5e-87c7-4b36-9aad-d20d44b7d0bf","Type":"ContainerStarted","Data":"9a4e0355a52de97077f363d92ccdce455dd14267e7054d79331d05ac3e7422cb"} Feb 23 13:25:36 crc kubenswrapper[4851]: I0223 13:25:36.948878 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:36 crc kubenswrapper[4851]: I0223 13:25:36.948891 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.043064 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.268703 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.283004 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-42p6n" podStartSLOduration=9.712845607 podStartE2EDuration="22.282987218s" podCreationTimestamp="2026-02-23 13:25:15 +0000 UTC" firstStartedPulling="2026-02-23 13:25:19.477266528 +0000 UTC m=+1074.158970206" lastFinishedPulling="2026-02-23 13:25:32.047408139 +0000 UTC m=+1086.729111817" observedRunningTime="2026-02-23 13:25:37.00569371 +0000 UTC m=+1091.687397388" watchObservedRunningTime="2026-02-23 13:25:37.282987218 +0000 UTC m=+1091.964690896" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.449735 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-dns-svc\") pod \"a1a68000-b534-4591-90e2-a44e650c15e1\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.450176 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-config\") pod \"a1a68000-b534-4591-90e2-a44e650c15e1\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.450249 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwxt9\" (UniqueName: \"kubernetes.io/projected/a1a68000-b534-4591-90e2-a44e650c15e1-kube-api-access-nwxt9\") pod \"a1a68000-b534-4591-90e2-a44e650c15e1\" (UID: \"a1a68000-b534-4591-90e2-a44e650c15e1\") " Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.457607 4851 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1a68000-b534-4591-90e2-a44e650c15e1-kube-api-access-nwxt9" (OuterVolumeSpecName: "kube-api-access-nwxt9") pod "a1a68000-b534-4591-90e2-a44e650c15e1" (UID: "a1a68000-b534-4591-90e2-a44e650c15e1"). InnerVolumeSpecName "kube-api-access-nwxt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.485400 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1a68000-b534-4591-90e2-a44e650c15e1" (UID: "a1a68000-b534-4591-90e2-a44e650c15e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.486923 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-config" (OuterVolumeSpecName: "config") pod "a1a68000-b534-4591-90e2-a44e650c15e1" (UID: "a1a68000-b534-4591-90e2-a44e650c15e1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.551769 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwxt9\" (UniqueName: \"kubernetes.io/projected/a1a68000-b534-4591-90e2-a44e650c15e1-kube-api-access-nwxt9\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.551808 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.551817 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1a68000-b534-4591-90e2-a44e650c15e1-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.957357 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" event={"ID":"ef1cf674-79a9-49b6-b482-8a04787e511e","Type":"ContainerStarted","Data":"9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032"} Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.957987 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.961034 4851 generic.go:334] "Generic (PLEG): container finished" podID="a1a68000-b534-4591-90e2-a44e650c15e1" containerID="ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d" exitCode=0 Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.961140 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" event={"ID":"a1a68000-b534-4591-90e2-a44e650c15e1","Type":"ContainerDied","Data":"ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d"} Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.961158 4851 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.961182 4851 scope.go:117] "RemoveContainer" containerID="ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.961172 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9xb8s" event={"ID":"a1a68000-b534-4591-90e2-a44e650c15e1","Type":"ContainerDied","Data":"611ed65f240d5b29511ca894ded4fe57b8ee79e92421a1dd1ba4f1d226863c3c"} Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.980627 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" podStartSLOduration=2.980609321 podStartE2EDuration="2.980609321s" podCreationTimestamp="2026-02-23 13:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:25:37.975177617 +0000 UTC m=+1092.656881315" watchObservedRunningTime="2026-02-23 13:25:37.980609321 +0000 UTC m=+1092.662312999" Feb 23 13:25:37 crc kubenswrapper[4851]: I0223 13:25:37.981413 4851 scope.go:117] "RemoveContainer" containerID="befb10a202c6366c5372ef82a26dc652f44d73e163e5e75e039f4f494ecfea2b" Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.005017 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9xb8s"] Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.009166 4851 scope.go:117] "RemoveContainer" containerID="ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d" Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.011698 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9xb8s"] Feb 23 13:25:38 crc kubenswrapper[4851]: E0223 13:25:38.013942 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d\": container with ID starting with ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d not found: ID does not exist" containerID="ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d" Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.013971 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d"} err="failed to get container status \"ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d\": rpc error: code = NotFound desc = could not find container \"ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d\": container with ID starting with ada7695ed90e6327cf29ecadfbc620301c4fb3f9324d0d5e1f1a52f33f7a4c0d not found: ID does not exist" Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.013989 4851 scope.go:117] "RemoveContainer" containerID="befb10a202c6366c5372ef82a26dc652f44d73e163e5e75e039f4f494ecfea2b" Feb 23 13:25:38 crc kubenswrapper[4851]: E0223 13:25:38.014366 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"befb10a202c6366c5372ef82a26dc652f44d73e163e5e75e039f4f494ecfea2b\": container with ID starting with befb10a202c6366c5372ef82a26dc652f44d73e163e5e75e039f4f494ecfea2b not found: ID does not exist" containerID="befb10a202c6366c5372ef82a26dc652f44d73e163e5e75e039f4f494ecfea2b" Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.014423 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"befb10a202c6366c5372ef82a26dc652f44d73e163e5e75e039f4f494ecfea2b"} err="failed to get container status \"befb10a202c6366c5372ef82a26dc652f44d73e163e5e75e039f4f494ecfea2b\": rpc error: code = NotFound desc = could not find container 
\"befb10a202c6366c5372ef82a26dc652f44d73e163e5e75e039f4f494ecfea2b\": container with ID starting with befb10a202c6366c5372ef82a26dc652f44d73e163e5e75e039f4f494ecfea2b not found: ID does not exist" Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.042848 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.084510 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:38 crc kubenswrapper[4851]: E0223 13:25:38.243171 4851 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0d61403_fda9_4081_8c39_32ff86cc879c.slice/crio-conmon-24f3798c6dcf6845423a63c25d76e5bc2d2840766534204761e99c6d1f06512d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb3d0e2c_9427_4585_8f01_0e1640feca9a.slice/crio-6d27a5631bf6c15a02c3ba4fa21de2506ee5088ecbff5b4535c587d23d0c0155.scope\": RecentStats: unable to find data in memory cache]" Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.969565 4851 generic.go:334] "Generic (PLEG): container finished" podID="a0d61403-fda9-4081-8c39-32ff86cc879c" containerID="24f3798c6dcf6845423a63c25d76e5bc2d2840766534204761e99c6d1f06512d" exitCode=0 Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.969710 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a0d61403-fda9-4081-8c39-32ff86cc879c","Type":"ContainerDied","Data":"24f3798c6dcf6845423a63c25d76e5bc2d2840766534204761e99c6d1f06512d"} Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.974441 4851 generic.go:334] "Generic (PLEG): container finished" podID="cb3d0e2c-9427-4585-8f01-0e1640feca9a" 
containerID="6d27a5631bf6c15a02c3ba4fa21de2506ee5088ecbff5b4535c587d23d0c0155" exitCode=0 Feb 23 13:25:38 crc kubenswrapper[4851]: I0223 13:25:38.975428 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cb3d0e2c-9427-4585-8f01-0e1640feca9a","Type":"ContainerDied","Data":"6d27a5631bf6c15a02c3ba4fa21de2506ee5088ecbff5b4535c587d23d0c0155"} Feb 23 13:25:39 crc kubenswrapper[4851]: I0223 13:25:39.098431 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:39 crc kubenswrapper[4851]: I0223 13:25:39.146968 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 23 13:25:39 crc kubenswrapper[4851]: I0223 13:25:39.977919 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a68000-b534-4591-90e2-a44e650c15e1" path="/var/lib/kubelet/pods/a1a68000-b534-4591-90e2-a44e650c15e1/volumes" Feb 23 13:25:39 crc kubenswrapper[4851]: I0223 13:25:39.982506 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a0d61403-fda9-4081-8c39-32ff86cc879c","Type":"ContainerStarted","Data":"b4a3495bc36320aaf7a1a5b17d0f8de0db607e8d8d333811592040928d3076f2"} Feb 23 13:25:39 crc kubenswrapper[4851]: I0223 13:25:39.984474 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cb3d0e2c-9427-4585-8f01-0e1640feca9a","Type":"ContainerStarted","Data":"d34a200a553632fe131d0376029ba3d3ac92316dc011fe82f25939d919ae89c0"} Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.031753 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=19.59808039 podStartE2EDuration="33.031734229s" podCreationTimestamp="2026-02-23 13:25:07 +0000 UTC" firstStartedPulling="2026-02-23 13:25:19.395503034 +0000 UTC m=+1074.077206712" 
lastFinishedPulling="2026-02-23 13:25:32.829156873 +0000 UTC m=+1087.510860551" observedRunningTime="2026-02-23 13:25:40.003055828 +0000 UTC m=+1094.684759516" watchObservedRunningTime="2026-02-23 13:25:40.031734229 +0000 UTC m=+1094.713437907" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.033041 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.033114 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.735632452 podStartE2EDuration="31.033106158s" podCreationTimestamp="2026-02-23 13:25:09 +0000 UTC" firstStartedPulling="2026-02-23 13:25:19.198850448 +0000 UTC m=+1073.880554136" lastFinishedPulling="2026-02-23 13:25:32.496324164 +0000 UTC m=+1087.178027842" observedRunningTime="2026-02-23 13:25:40.025914885 +0000 UTC m=+1094.707618573" watchObservedRunningTime="2026-02-23 13:25:40.033106158 +0000 UTC m=+1094.714809826" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.131514 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.187492 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 23 13:25:40 crc kubenswrapper[4851]: E0223 13:25:40.187862 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a68000-b534-4591-90e2-a44e650c15e1" containerName="dnsmasq-dns" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.187881 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a68000-b534-4591-90e2-a44e650c15e1" containerName="dnsmasq-dns" Feb 23 13:25:40 crc kubenswrapper[4851]: E0223 13:25:40.187912 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a68000-b534-4591-90e2-a44e650c15e1" containerName="init" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 
13:25:40.187922 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a68000-b534-4591-90e2-a44e650c15e1" containerName="init" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.188106 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a68000-b534-4591-90e2-a44e650c15e1" containerName="dnsmasq-dns" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.189089 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.196349 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.196445 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zj2db" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.196669 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.196839 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.211517 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.299382 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.299459 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-metrics-certs-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.299511 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-scripts\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.299561 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.299595 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.299631 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpsbh\" (UniqueName: \"kubernetes.io/projected/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-kube-api-access-kpsbh\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.299668 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-config\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc 
kubenswrapper[4851]: I0223 13:25:40.401344 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.401401 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.401455 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpsbh\" (UniqueName: \"kubernetes.io/projected/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-kube-api-access-kpsbh\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.401940 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.401641 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-config\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.402309 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.402358 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.402414 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-scripts\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.402794 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-config\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.403186 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-scripts\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.415171 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.415915 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.416039 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.418791 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpsbh\" (UniqueName: \"kubernetes.io/projected/a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2-kube-api-access-kpsbh\") pod \"ovn-northd-0\" (UID: \"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2\") " pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.527047 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.693728 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.693782 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:40 crc kubenswrapper[4851]: I0223 13:25:40.940130 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 23 13:25:41 crc kubenswrapper[4851]: I0223 13:25:41.032930 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 23 13:25:41 crc kubenswrapper[4851]: I0223 13:25:41.924807 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:25:41 crc kubenswrapper[4851]: I0223 13:25:41.925125 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:25:42 crc kubenswrapper[4851]: I0223 13:25:42.012368 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2","Type":"ContainerStarted","Data":"649b01e86d0cdc32afd5a94c624c379a6e093d7a55db9bbbc55e74ab00c0ee62"} Feb 23 13:25:42 crc kubenswrapper[4851]: I0223 13:25:42.996789 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-kkz4f"] Feb 23 13:25:42 crc kubenswrapper[4851]: I0223 13:25:42.998929 4851 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" podUID="ef1cf674-79a9-49b6-b482-8a04787e511e" containerName="dnsmasq-dns" containerID="cri-o://9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032" gracePeriod=10 Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.000794 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.033506 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2","Type":"ContainerStarted","Data":"a1a655a02b8dbc477a623404e08df9a427d715e52ad67fca40ad9b497401c737"} Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.033552 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2","Type":"ContainerStarted","Data":"5487a9208f93a2383faed52264c250ac005344b566264d5ff196c0b72ebc26ae"} Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.034120 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.086459 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-29d8z"] Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.087685 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.101307 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-29d8z"] Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.107630 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.107454003 podStartE2EDuration="3.107606178s" podCreationTimestamp="2026-02-23 13:25:40 +0000 UTC" firstStartedPulling="2026-02-23 13:25:41.048713059 +0000 UTC m=+1095.730416737" lastFinishedPulling="2026-02-23 13:25:42.048865234 +0000 UTC m=+1096.730568912" observedRunningTime="2026-02-23 13:25:43.094494077 +0000 UTC m=+1097.776197785" watchObservedRunningTime="2026-02-23 13:25:43.107606178 +0000 UTC m=+1097.789309856" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.148927 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.151974 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-dns-svc\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.152132 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4s8f\" (UniqueName: \"kubernetes.io/projected/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-kube-api-access-f4s8f\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: 
\"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.152170 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.152262 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-config\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.256195 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.256267 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-dns-svc\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.256372 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " 
pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.256397 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4s8f\" (UniqueName: \"kubernetes.io/projected/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-kube-api-access-f4s8f\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.256457 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-config\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.257497 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-config\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.259077 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.259166 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-dns-svc\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.259559 
4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.279629 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4s8f\" (UniqueName: \"kubernetes.io/projected/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-kube-api-access-f4s8f\") pod \"dnsmasq-dns-698758b865-29d8z\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") " pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.459000 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.480943 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.560602 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-nb\") pod \"ef1cf674-79a9-49b6-b482-8a04787e511e\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.560702 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-config\") pod \"ef1cf674-79a9-49b6-b482-8a04787e511e\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.560793 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-sb\") pod \"ef1cf674-79a9-49b6-b482-8a04787e511e\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.560883 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzcl2\" (UniqueName: \"kubernetes.io/projected/ef1cf674-79a9-49b6-b482-8a04787e511e-kube-api-access-zzcl2\") pod \"ef1cf674-79a9-49b6-b482-8a04787e511e\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.561015 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-dns-svc\") pod \"ef1cf674-79a9-49b6-b482-8a04787e511e\" (UID: \"ef1cf674-79a9-49b6-b482-8a04787e511e\") " Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.569635 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ef1cf674-79a9-49b6-b482-8a04787e511e-kube-api-access-zzcl2" (OuterVolumeSpecName: "kube-api-access-zzcl2") pod "ef1cf674-79a9-49b6-b482-8a04787e511e" (UID: "ef1cf674-79a9-49b6-b482-8a04787e511e"). InnerVolumeSpecName "kube-api-access-zzcl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.623044 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef1cf674-79a9-49b6-b482-8a04787e511e" (UID: "ef1cf674-79a9-49b6-b482-8a04787e511e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.625659 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef1cf674-79a9-49b6-b482-8a04787e511e" (UID: "ef1cf674-79a9-49b6-b482-8a04787e511e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.641156 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef1cf674-79a9-49b6-b482-8a04787e511e" (UID: "ef1cf674-79a9-49b6-b482-8a04787e511e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.650748 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-config" (OuterVolumeSpecName: "config") pod "ef1cf674-79a9-49b6-b482-8a04787e511e" (UID: "ef1cf674-79a9-49b6-b482-8a04787e511e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.663367 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.663396 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.663406 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.663415 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef1cf674-79a9-49b6-b482-8a04787e511e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.663424 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzcl2\" (UniqueName: \"kubernetes.io/projected/ef1cf674-79a9-49b6-b482-8a04787e511e-kube-api-access-zzcl2\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:43 crc kubenswrapper[4851]: I0223 13:25:43.986254 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-29d8z"] Feb 23 13:25:43 crc kubenswrapper[4851]: W0223 13:25:43.991904 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c03be1e_abc1_4289_b6dc_ba4b3ac70614.slice/crio-9fab3f06c2a503c612e232a4d1c511dafecd5e27da67591dcc6bc42f16ced5aa WatchSource:0}: Error finding container 9fab3f06c2a503c612e232a4d1c511dafecd5e27da67591dcc6bc42f16ced5aa: Status 404 returned error can't find 
the container with id 9fab3f06c2a503c612e232a4d1c511dafecd5e27da67591dcc6bc42f16ced5aa Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.047669 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-29d8z" event={"ID":"1c03be1e-abc1-4289-b6dc-ba4b3ac70614","Type":"ContainerStarted","Data":"9fab3f06c2a503c612e232a4d1c511dafecd5e27da67591dcc6bc42f16ced5aa"} Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.054855 4851 generic.go:334] "Generic (PLEG): container finished" podID="ef1cf674-79a9-49b6-b482-8a04787e511e" containerID="9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032" exitCode=0 Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.054956 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" event={"ID":"ef1cf674-79a9-49b6-b482-8a04787e511e","Type":"ContainerDied","Data":"9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032"} Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.055019 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" event={"ID":"ef1cf674-79a9-49b6-b482-8a04787e511e","Type":"ContainerDied","Data":"3cd7cb3243fcfc77d592160cd0b8dc2ec9c8de61a1a3f82a783b437d334b8628"} Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.055037 4851 scope.go:117] "RemoveContainer" containerID="9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.055370 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-kkz4f" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.082446 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-kkz4f"] Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.088902 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-kkz4f"] Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.093409 4851 scope.go:117] "RemoveContainer" containerID="f1cf4a1cd4440f6cafdf79dcf8e68c3a340d659fd6e5040a8645f6be6a10fb3b" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.154810 4851 scope.go:117] "RemoveContainer" containerID="9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032" Feb 23 13:25:44 crc kubenswrapper[4851]: E0223 13:25:44.155309 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032\": container with ID starting with 9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032 not found: ID does not exist" containerID="9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.155373 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032"} err="failed to get container status \"9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032\": rpc error: code = NotFound desc = could not find container \"9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032\": container with ID starting with 9409193aac56fe2b54b61399092c35771fb6b96ed8b26f52c41f5753f24db032 not found: ID does not exist" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.155400 4851 scope.go:117] "RemoveContainer" containerID="f1cf4a1cd4440f6cafdf79dcf8e68c3a340d659fd6e5040a8645f6be6a10fb3b" Feb 23 
13:25:44 crc kubenswrapper[4851]: E0223 13:25:44.155738 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1cf4a1cd4440f6cafdf79dcf8e68c3a340d659fd6e5040a8645f6be6a10fb3b\": container with ID starting with f1cf4a1cd4440f6cafdf79dcf8e68c3a340d659fd6e5040a8645f6be6a10fb3b not found: ID does not exist" containerID="f1cf4a1cd4440f6cafdf79dcf8e68c3a340d659fd6e5040a8645f6be6a10fb3b" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.155879 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1cf4a1cd4440f6cafdf79dcf8e68c3a340d659fd6e5040a8645f6be6a10fb3b"} err="failed to get container status \"f1cf4a1cd4440f6cafdf79dcf8e68c3a340d659fd6e5040a8645f6be6a10fb3b\": rpc error: code = NotFound desc = could not find container \"f1cf4a1cd4440f6cafdf79dcf8e68c3a340d659fd6e5040a8645f6be6a10fb3b\": container with ID starting with f1cf4a1cd4440f6cafdf79dcf8e68c3a340d659fd6e5040a8645f6be6a10fb3b not found: ID does not exist" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.166635 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 23 13:25:44 crc kubenswrapper[4851]: E0223 13:25:44.166928 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef1cf674-79a9-49b6-b482-8a04787e511e" containerName="dnsmasq-dns" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.166945 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef1cf674-79a9-49b6-b482-8a04787e511e" containerName="dnsmasq-dns" Feb 23 13:25:44 crc kubenswrapper[4851]: E0223 13:25:44.166967 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef1cf674-79a9-49b6-b482-8a04787e511e" containerName="init" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.166974 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef1cf674-79a9-49b6-b482-8a04787e511e" containerName="init" Feb 23 13:25:44 crc 
kubenswrapper[4851]: I0223 13:25:44.167126 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef1cf674-79a9-49b6-b482-8a04787e511e" containerName="dnsmasq-dns" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.173215 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.175686 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.176374 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.176543 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-jrg5h" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.177984 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.200795 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.274047 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3cd939-1e76-4a55-bb7b-614ae880e79c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.274095 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2lhb\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-kube-api-access-s2lhb\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 
13:25:44.274134 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7e3cd939-1e76-4a55-bb7b-614ae880e79c-cache\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.274292 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.274456 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7e3cd939-1e76-4a55-bb7b-614ae880e79c-lock\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.274529 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.376057 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2lhb\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-kube-api-access-s2lhb\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.376128 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/7e3cd939-1e76-4a55-bb7b-614ae880e79c-cache\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.376171 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.376226 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7e3cd939-1e76-4a55-bb7b-614ae880e79c-lock\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.376266 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.376320 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3cd939-1e76-4a55-bb7b-614ae880e79c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: E0223 13:25:44.376587 4851 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 13:25:44 crc kubenswrapper[4851]: E0223 13:25:44.376622 4851 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 13:25:44 crc 
kubenswrapper[4851]: I0223 13:25:44.376588 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: E0223 13:25:44.376668 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift podName:7e3cd939-1e76-4a55-bb7b-614ae880e79c nodeName:}" failed. No retries permitted until 2026-02-23 13:25:44.876650851 +0000 UTC m=+1099.558354529 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift") pod "swift-storage-0" (UID: "7e3cd939-1e76-4a55-bb7b-614ae880e79c") : configmap "swift-ring-files" not found Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.376729 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7e3cd939-1e76-4a55-bb7b-614ae880e79c-lock\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.376852 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7e3cd939-1e76-4a55-bb7b-614ae880e79c-cache\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.391476 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3cd939-1e76-4a55-bb7b-614ae880e79c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " 
pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.398277 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.404591 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2lhb\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-kube-api-access-s2lhb\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: I0223 13:25:44.882470 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:44 crc kubenswrapper[4851]: E0223 13:25:44.882696 4851 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 13:25:44 crc kubenswrapper[4851]: E0223 13:25:44.882728 4851 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 13:25:44 crc kubenswrapper[4851]: E0223 13:25:44.882786 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift podName:7e3cd939-1e76-4a55-bb7b-614ae880e79c nodeName:}" failed. No retries permitted until 2026-02-23 13:25:45.882768965 +0000 UTC m=+1100.564472643 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift") pod "swift-storage-0" (UID: "7e3cd939-1e76-4a55-bb7b-614ae880e79c") : configmap "swift-ring-files" not found Feb 23 13:25:45 crc kubenswrapper[4851]: I0223 13:25:45.062263 4851 generic.go:334] "Generic (PLEG): container finished" podID="1c03be1e-abc1-4289-b6dc-ba4b3ac70614" containerID="55773d1c15f561544ec1ce0e581cf7ab8ac6bdab69dcff565d5712486a9947e2" exitCode=0 Feb 23 13:25:45 crc kubenswrapper[4851]: I0223 13:25:45.063345 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-29d8z" event={"ID":"1c03be1e-abc1-4289-b6dc-ba4b3ac70614","Type":"ContainerDied","Data":"55773d1c15f561544ec1ce0e581cf7ab8ac6bdab69dcff565d5712486a9947e2"} Feb 23 13:25:45 crc kubenswrapper[4851]: I0223 13:25:45.067289 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2rf22" event={"ID":"f366da8b-d0d3-411e-afec-53af288b0c42","Type":"ContainerStarted","Data":"72c6d5dfd2289fbc81fe42be4abf67f5e84bc60598daad4db853fab0fe97d3a9"} Feb 23 13:25:45 crc kubenswrapper[4851]: I0223 13:25:45.067513 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-2rf22" Feb 23 13:25:45 crc kubenswrapper[4851]: I0223 13:25:45.105866 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2rf22" podStartSLOduration=5.128050315 podStartE2EDuration="30.105844038s" podCreationTimestamp="2026-02-23 13:25:15 +0000 UTC" firstStartedPulling="2026-02-23 13:25:19.402224384 +0000 UTC m=+1074.083928062" lastFinishedPulling="2026-02-23 13:25:44.380018107 +0000 UTC m=+1099.061721785" observedRunningTime="2026-02-23 13:25:45.099037746 +0000 UTC m=+1099.780741444" watchObservedRunningTime="2026-02-23 13:25:45.105844038 +0000 UTC m=+1099.787547726" Feb 23 13:25:45 crc kubenswrapper[4851]: I0223 13:25:45.899283 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:45 crc kubenswrapper[4851]: E0223 13:25:45.899874 4851 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 13:25:45 crc kubenswrapper[4851]: E0223 13:25:45.899887 4851 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 13:25:45 crc kubenswrapper[4851]: E0223 13:25:45.899925 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift podName:7e3cd939-1e76-4a55-bb7b-614ae880e79c nodeName:}" failed. No retries permitted until 2026-02-23 13:25:47.899913251 +0000 UTC m=+1102.581616929 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift") pod "swift-storage-0" (UID: "7e3cd939-1e76-4a55-bb7b-614ae880e79c") : configmap "swift-ring-files" not found Feb 23 13:25:45 crc kubenswrapper[4851]: I0223 13:25:45.986399 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef1cf674-79a9-49b6-b482-8a04787e511e" path="/var/lib/kubelet/pods/ef1cf674-79a9-49b6-b482-8a04787e511e/volumes" Feb 23 13:25:46 crc kubenswrapper[4851]: I0223 13:25:46.077484 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-29d8z" event={"ID":"1c03be1e-abc1-4289-b6dc-ba4b3ac70614","Type":"ContainerStarted","Data":"5032e438bfae7ac60345d880dc3b07826033c3e1d71731017da340373b6a35df"} Feb 23 13:25:46 crc kubenswrapper[4851]: I0223 13:25:46.077631 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:46 crc kubenswrapper[4851]: I0223 13:25:46.111542 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-29d8z" podStartSLOduration=3.111520359 podStartE2EDuration="3.111520359s" podCreationTimestamp="2026-02-23 13:25:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:25:46.105980413 +0000 UTC m=+1100.787684101" watchObservedRunningTime="2026-02-23 13:25:46.111520359 +0000 UTC m=+1100.793224037" Feb 23 13:25:46 crc kubenswrapper[4851]: I0223 13:25:46.823084 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:46 crc kubenswrapper[4851]: I0223 13:25:46.952953 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 23 13:25:47 crc kubenswrapper[4851]: I0223 13:25:47.931258 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:47 crc kubenswrapper[4851]: E0223 13:25:47.931452 4851 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 13:25:47 crc kubenswrapper[4851]: E0223 13:25:47.931629 4851 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 13:25:47 crc kubenswrapper[4851]: E0223 13:25:47.931674 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift podName:7e3cd939-1e76-4a55-bb7b-614ae880e79c nodeName:}" failed. No retries permitted until 2026-02-23 13:25:51.93165913 +0000 UTC m=+1106.613362808 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift") pod "swift-storage-0" (UID: "7e3cd939-1e76-4a55-bb7b-614ae880e79c") : configmap "swift-ring-files" not found Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.096446 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3","Type":"ContainerStarted","Data":"1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d"} Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.097053 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.113447 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.887536468 podStartE2EDuration="36.113426414s" podCreationTimestamp="2026-02-23 13:25:12 +0000 UTC" firstStartedPulling="2026-02-23 13:25:19.173608854 +0000 UTC m=+1073.855312532" lastFinishedPulling="2026-02-23 13:25:47.39949879 +0000 UTC m=+1102.081202478" observedRunningTime="2026-02-23 13:25:48.110231563 +0000 UTC m=+1102.791935251" watchObservedRunningTime="2026-02-23 13:25:48.113426414 +0000 UTC m=+1102.795130092" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.138116 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-947gb"] Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.139422 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.141568 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.141997 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.142156 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.151371 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-947gb"] Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.235680 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-dispersionconf\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.235760 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-combined-ca-bundle\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.235787 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-scripts\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.235820 
4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wszt8\" (UniqueName: \"kubernetes.io/projected/365ea813-ed43-4771-a20a-d8ad58487d86-kube-api-access-wszt8\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.235844 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-ring-data-devices\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.235881 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/365ea813-ed43-4771-a20a-d8ad58487d86-etc-swift\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.235945 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-swiftconf\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.337296 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-dispersionconf\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.337754 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-combined-ca-bundle\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.337931 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-scripts\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.338119 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wszt8\" (UniqueName: \"kubernetes.io/projected/365ea813-ed43-4771-a20a-d8ad58487d86-kube-api-access-wszt8\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.339298 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-scripts\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.339775 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-ring-data-devices\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.339894 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-ring-data-devices\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.340842 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/365ea813-ed43-4771-a20a-d8ad58487d86-etc-swift\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.340985 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/365ea813-ed43-4771-a20a-d8ad58487d86-etc-swift\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.341270 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-swiftconf\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.343017 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-combined-ca-bundle\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.343134 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-dispersionconf\") pod 
\"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.347581 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-swiftconf\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.357114 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wszt8\" (UniqueName: \"kubernetes.io/projected/365ea813-ed43-4771-a20a-d8ad58487d86-kube-api-access-wszt8\") pod \"swift-ring-rebalance-947gb\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.495042 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:25:48 crc kubenswrapper[4851]: I0223 13:25:48.921082 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-947gb"] Feb 23 13:25:48 crc kubenswrapper[4851]: W0223 13:25:48.931043 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod365ea813_ed43_4771_a20a_d8ad58487d86.slice/crio-5c70dee143dbbd7879d12394c8cf81cc07cb2eb15e60e73428faf417742d916a WatchSource:0}: Error finding container 5c70dee143dbbd7879d12394c8cf81cc07cb2eb15e60e73428faf417742d916a: Status 404 returned error can't find the container with id 5c70dee143dbbd7879d12394c8cf81cc07cb2eb15e60e73428faf417742d916a Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.103517 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-947gb" 
event={"ID":"365ea813-ed43-4771-a20a-d8ad58487d86","Type":"ContainerStarted","Data":"5c70dee143dbbd7879d12394c8cf81cc07cb2eb15e60e73428faf417742d916a"} Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.339995 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.340045 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.418958 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9zr5r"] Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.420193 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9zr5r" Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.425714 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.427159 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9zr5r"] Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.468918 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.568670 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-operator-scripts\") pod \"root-account-create-update-9zr5r\" (UID: \"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7\") " pod="openstack/root-account-create-update-9zr5r" Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.568838 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5d9g\" (UniqueName: 
\"kubernetes.io/projected/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-kube-api-access-x5d9g\") pod \"root-account-create-update-9zr5r\" (UID: \"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7\") " pod="openstack/root-account-create-update-9zr5r" Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.671937 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-operator-scripts\") pod \"root-account-create-update-9zr5r\" (UID: \"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7\") " pod="openstack/root-account-create-update-9zr5r" Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.672026 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5d9g\" (UniqueName: \"kubernetes.io/projected/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-kube-api-access-x5d9g\") pod \"root-account-create-update-9zr5r\" (UID: \"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7\") " pod="openstack/root-account-create-update-9zr5r" Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.673451 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-operator-scripts\") pod \"root-account-create-update-9zr5r\" (UID: \"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7\") " pod="openstack/root-account-create-update-9zr5r" Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.692733 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5d9g\" (UniqueName: \"kubernetes.io/projected/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-kube-api-access-x5d9g\") pod \"root-account-create-update-9zr5r\" (UID: \"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7\") " pod="openstack/root-account-create-update-9zr5r" Feb 23 13:25:49 crc kubenswrapper[4851]: I0223 13:25:49.778943 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9zr5r" Feb 23 13:25:50 crc kubenswrapper[4851]: I0223 13:25:50.189281 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 23 13:25:50 crc kubenswrapper[4851]: I0223 13:25:50.302583 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9zr5r"] Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.172418 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-wwt5b"] Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.173500 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wwt5b" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.180934 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wwt5b"] Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.254868 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d081-account-create-update-xsbtx"] Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.256255 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d081-account-create-update-xsbtx" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.258723 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.276652 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d081-account-create-update-xsbtx"] Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.298418 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-operator-scripts\") pod \"glance-db-create-wwt5b\" (UID: \"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc\") " pod="openstack/glance-db-create-wwt5b" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.298499 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvcjd\" (UniqueName: \"kubernetes.io/projected/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-kube-api-access-cvcjd\") pod \"glance-db-create-wwt5b\" (UID: \"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc\") " pod="openstack/glance-db-create-wwt5b" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.401523 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-operator-scripts\") pod \"glance-db-create-wwt5b\" (UID: \"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc\") " pod="openstack/glance-db-create-wwt5b" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.401642 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvcjd\" (UniqueName: \"kubernetes.io/projected/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-kube-api-access-cvcjd\") pod \"glance-db-create-wwt5b\" (UID: \"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc\") " pod="openstack/glance-db-create-wwt5b" 
Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.401696 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t8rv\" (UniqueName: \"kubernetes.io/projected/d49c3d1e-e4c6-42c7-8132-30aad920eade-kube-api-access-2t8rv\") pod \"glance-d081-account-create-update-xsbtx\" (UID: \"d49c3d1e-e4c6-42c7-8132-30aad920eade\") " pod="openstack/glance-d081-account-create-update-xsbtx" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.401787 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d49c3d1e-e4c6-42c7-8132-30aad920eade-operator-scripts\") pod \"glance-d081-account-create-update-xsbtx\" (UID: \"d49c3d1e-e4c6-42c7-8132-30aad920eade\") " pod="openstack/glance-d081-account-create-update-xsbtx" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.402390 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-operator-scripts\") pod \"glance-db-create-wwt5b\" (UID: \"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc\") " pod="openstack/glance-db-create-wwt5b" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.423664 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvcjd\" (UniqueName: \"kubernetes.io/projected/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-kube-api-access-cvcjd\") pod \"glance-db-create-wwt5b\" (UID: \"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc\") " pod="openstack/glance-db-create-wwt5b" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.501513 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-wwt5b" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.502693 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t8rv\" (UniqueName: \"kubernetes.io/projected/d49c3d1e-e4c6-42c7-8132-30aad920eade-kube-api-access-2t8rv\") pod \"glance-d081-account-create-update-xsbtx\" (UID: \"d49c3d1e-e4c6-42c7-8132-30aad920eade\") " pod="openstack/glance-d081-account-create-update-xsbtx" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.502744 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d49c3d1e-e4c6-42c7-8132-30aad920eade-operator-scripts\") pod \"glance-d081-account-create-update-xsbtx\" (UID: \"d49c3d1e-e4c6-42c7-8132-30aad920eade\") " pod="openstack/glance-d081-account-create-update-xsbtx" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.503343 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d49c3d1e-e4c6-42c7-8132-30aad920eade-operator-scripts\") pod \"glance-d081-account-create-update-xsbtx\" (UID: \"d49c3d1e-e4c6-42c7-8132-30aad920eade\") " pod="openstack/glance-d081-account-create-update-xsbtx" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.526971 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t8rv\" (UniqueName: \"kubernetes.io/projected/d49c3d1e-e4c6-42c7-8132-30aad920eade-kube-api-access-2t8rv\") pod \"glance-d081-account-create-update-xsbtx\" (UID: \"d49c3d1e-e4c6-42c7-8132-30aad920eade\") " pod="openstack/glance-d081-account-create-update-xsbtx" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.578823 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d081-account-create-update-xsbtx" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.877592 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7ssg8"] Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.879027 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7ssg8" Feb 23 13:25:51 crc kubenswrapper[4851]: I0223 13:25:51.887315 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7ssg8"] Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.015189 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqgcq\" (UniqueName: \"kubernetes.io/projected/2c7a1472-be44-4ead-a548-7a377e357ea0-kube-api-access-cqgcq\") pod \"keystone-db-create-7ssg8\" (UID: \"2c7a1472-be44-4ead-a548-7a377e357ea0\") " pod="openstack/keystone-db-create-7ssg8" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.015322 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c7a1472-be44-4ead-a548-7a377e357ea0-operator-scripts\") pod \"keystone-db-create-7ssg8\" (UID: \"2c7a1472-be44-4ead-a548-7a377e357ea0\") " pod="openstack/keystone-db-create-7ssg8" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.015420 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:25:52 crc kubenswrapper[4851]: E0223 13:25:52.015579 4851 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 13:25:52 crc kubenswrapper[4851]: E0223 13:25:52.015594 4851 
projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 13:25:52 crc kubenswrapper[4851]: E0223 13:25:52.015646 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift podName:7e3cd939-1e76-4a55-bb7b-614ae880e79c nodeName:}" failed. No retries permitted until 2026-02-23 13:26:00.015625545 +0000 UTC m=+1114.697329223 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift") pod "swift-storage-0" (UID: "7e3cd939-1e76-4a55-bb7b-614ae880e79c") : configmap "swift-ring-files" not found Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.017765 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b5c5-account-create-update-97hg2"] Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.019231 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b5c5-account-create-update-97hg2" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.024234 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.037219 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b5c5-account-create-update-97hg2"] Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.100589 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6ft7k"] Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.101746 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-6ft7k" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.107536 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6ft7k"] Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.116928 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c7a1472-be44-4ead-a548-7a377e357ea0-operator-scripts\") pod \"keystone-db-create-7ssg8\" (UID: \"2c7a1472-be44-4ead-a548-7a377e357ea0\") " pod="openstack/keystone-db-create-7ssg8" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.117013 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92g4l\" (UniqueName: \"kubernetes.io/projected/cb1aeb9c-194d-4b98-9109-c2474a0a8767-kube-api-access-92g4l\") pod \"keystone-b5c5-account-create-update-97hg2\" (UID: \"cb1aeb9c-194d-4b98-9109-c2474a0a8767\") " pod="openstack/keystone-b5c5-account-create-update-97hg2" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.117081 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqgcq\" (UniqueName: \"kubernetes.io/projected/2c7a1472-be44-4ead-a548-7a377e357ea0-kube-api-access-cqgcq\") pod \"keystone-db-create-7ssg8\" (UID: \"2c7a1472-be44-4ead-a548-7a377e357ea0\") " pod="openstack/keystone-db-create-7ssg8" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.117139 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb1aeb9c-194d-4b98-9109-c2474a0a8767-operator-scripts\") pod \"keystone-b5c5-account-create-update-97hg2\" (UID: \"cb1aeb9c-194d-4b98-9109-c2474a0a8767\") " pod="openstack/keystone-b5c5-account-create-update-97hg2" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.117803 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c7a1472-be44-4ead-a548-7a377e357ea0-operator-scripts\") pod \"keystone-db-create-7ssg8\" (UID: \"2c7a1472-be44-4ead-a548-7a377e357ea0\") " pod="openstack/keystone-db-create-7ssg8" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.139005 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqgcq\" (UniqueName: \"kubernetes.io/projected/2c7a1472-be44-4ead-a548-7a377e357ea0-kube-api-access-cqgcq\") pod \"keystone-db-create-7ssg8\" (UID: \"2c7a1472-be44-4ead-a548-7a377e357ea0\") " pod="openstack/keystone-db-create-7ssg8" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.204666 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0af4-account-create-update-tklbk"] Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.205830 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0af4-account-create-update-tklbk" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.206055 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7ssg8" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.208273 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.213375 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0af4-account-create-update-tklbk"] Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.218393 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a270ad09-d30d-4100-be0f-cc026ba47238-operator-scripts\") pod \"placement-db-create-6ft7k\" (UID: \"a270ad09-d30d-4100-be0f-cc026ba47238\") " pod="openstack/placement-db-create-6ft7k" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.218428 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92g4l\" (UniqueName: \"kubernetes.io/projected/cb1aeb9c-194d-4b98-9109-c2474a0a8767-kube-api-access-92g4l\") pod \"keystone-b5c5-account-create-update-97hg2\" (UID: \"cb1aeb9c-194d-4b98-9109-c2474a0a8767\") " pod="openstack/keystone-b5c5-account-create-update-97hg2" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.218483 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rhml\" (UniqueName: \"kubernetes.io/projected/a270ad09-d30d-4100-be0f-cc026ba47238-kube-api-access-9rhml\") pod \"placement-db-create-6ft7k\" (UID: \"a270ad09-d30d-4100-be0f-cc026ba47238\") " pod="openstack/placement-db-create-6ft7k" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.218632 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb1aeb9c-194d-4b98-9109-c2474a0a8767-operator-scripts\") pod \"keystone-b5c5-account-create-update-97hg2\" (UID: 
\"cb1aeb9c-194d-4b98-9109-c2474a0a8767\") " pod="openstack/keystone-b5c5-account-create-update-97hg2" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.219886 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb1aeb9c-194d-4b98-9109-c2474a0a8767-operator-scripts\") pod \"keystone-b5c5-account-create-update-97hg2\" (UID: \"cb1aeb9c-194d-4b98-9109-c2474a0a8767\") " pod="openstack/keystone-b5c5-account-create-update-97hg2" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.234540 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92g4l\" (UniqueName: \"kubernetes.io/projected/cb1aeb9c-194d-4b98-9109-c2474a0a8767-kube-api-access-92g4l\") pod \"keystone-b5c5-account-create-update-97hg2\" (UID: \"cb1aeb9c-194d-4b98-9109-c2474a0a8767\") " pod="openstack/keystone-b5c5-account-create-update-97hg2" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.320944 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7tbc\" (UniqueName: \"kubernetes.io/projected/152d8a17-4503-471d-adf6-8dcbd8d337db-kube-api-access-q7tbc\") pod \"placement-0af4-account-create-update-tklbk\" (UID: \"152d8a17-4503-471d-adf6-8dcbd8d337db\") " pod="openstack/placement-0af4-account-create-update-tklbk" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.321031 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a270ad09-d30d-4100-be0f-cc026ba47238-operator-scripts\") pod \"placement-db-create-6ft7k\" (UID: \"a270ad09-d30d-4100-be0f-cc026ba47238\") " pod="openstack/placement-db-create-6ft7k" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.321122 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rhml\" (UniqueName: 
\"kubernetes.io/projected/a270ad09-d30d-4100-be0f-cc026ba47238-kube-api-access-9rhml\") pod \"placement-db-create-6ft7k\" (UID: \"a270ad09-d30d-4100-be0f-cc026ba47238\") " pod="openstack/placement-db-create-6ft7k" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.321155 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/152d8a17-4503-471d-adf6-8dcbd8d337db-operator-scripts\") pod \"placement-0af4-account-create-update-tklbk\" (UID: \"152d8a17-4503-471d-adf6-8dcbd8d337db\") " pod="openstack/placement-0af4-account-create-update-tklbk" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.322628 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a270ad09-d30d-4100-be0f-cc026ba47238-operator-scripts\") pod \"placement-db-create-6ft7k\" (UID: \"a270ad09-d30d-4100-be0f-cc026ba47238\") " pod="openstack/placement-db-create-6ft7k" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.351604 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b5c5-account-create-update-97hg2" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.351739 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rhml\" (UniqueName: \"kubernetes.io/projected/a270ad09-d30d-4100-be0f-cc026ba47238-kube-api-access-9rhml\") pod \"placement-db-create-6ft7k\" (UID: \"a270ad09-d30d-4100-be0f-cc026ba47238\") " pod="openstack/placement-db-create-6ft7k" Feb 23 13:25:52 crc kubenswrapper[4851]: W0223 13:25:52.398015 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82ddb3b2_3070_4d45_a408_2d0c3e3db5b7.slice/crio-0c3407fec093f22895e043f0ce04ec06b57101c7bff76013aa0fcad620863e2e WatchSource:0}: Error finding container 0c3407fec093f22895e043f0ce04ec06b57101c7bff76013aa0fcad620863e2e: Status 404 returned error can't find the container with id 0c3407fec093f22895e043f0ce04ec06b57101c7bff76013aa0fcad620863e2e Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.420636 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-6ft7k" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.422251 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/152d8a17-4503-471d-adf6-8dcbd8d337db-operator-scripts\") pod \"placement-0af4-account-create-update-tklbk\" (UID: \"152d8a17-4503-471d-adf6-8dcbd8d337db\") " pod="openstack/placement-0af4-account-create-update-tklbk" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.422381 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7tbc\" (UniqueName: \"kubernetes.io/projected/152d8a17-4503-471d-adf6-8dcbd8d337db-kube-api-access-q7tbc\") pod \"placement-0af4-account-create-update-tklbk\" (UID: \"152d8a17-4503-471d-adf6-8dcbd8d337db\") " pod="openstack/placement-0af4-account-create-update-tklbk" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.423015 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/152d8a17-4503-471d-adf6-8dcbd8d337db-operator-scripts\") pod \"placement-0af4-account-create-update-tklbk\" (UID: \"152d8a17-4503-471d-adf6-8dcbd8d337db\") " pod="openstack/placement-0af4-account-create-update-tklbk" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.440162 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7tbc\" (UniqueName: \"kubernetes.io/projected/152d8a17-4503-471d-adf6-8dcbd8d337db-kube-api-access-q7tbc\") pod \"placement-0af4-account-create-update-tklbk\" (UID: \"152d8a17-4503-471d-adf6-8dcbd8d337db\") " pod="openstack/placement-0af4-account-create-update-tklbk" Feb 23 13:25:52 crc kubenswrapper[4851]: I0223 13:25:52.609108 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0af4-account-create-update-tklbk" Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.079268 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.119750 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b5c5-account-create-update-97hg2"] Feb 23 13:25:53 crc kubenswrapper[4851]: W0223 13:25:53.122396 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb1aeb9c_194d_4b98_9109_c2474a0a8767.slice/crio-8a8f0698f2076ccb334b61436c227298dfc4e82e640ffe01e75d984b87bbcbbf WatchSource:0}: Error finding container 8a8f0698f2076ccb334b61436c227298dfc4e82e640ffe01e75d984b87bbcbbf: Status 404 returned error can't find the container with id 8a8f0698f2076ccb334b61436c227298dfc4e82e640ffe01e75d984b87bbcbbf Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.134385 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wwt5b"] Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.163476 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b5c5-account-create-update-97hg2" event={"ID":"cb1aeb9c-194d-4b98-9109-c2474a0a8767","Type":"ContainerStarted","Data":"8a8f0698f2076ccb334b61436c227298dfc4e82e640ffe01e75d984b87bbcbbf"} Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.164873 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9zr5r" event={"ID":"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7","Type":"ContainerStarted","Data":"fc0f15f4f74bd983ed422066f23056436521e781ac25b2ef0008971a01f5c960"} Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.164899 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9zr5r" 
event={"ID":"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7","Type":"ContainerStarted","Data":"0c3407fec093f22895e043f0ce04ec06b57101c7bff76013aa0fcad620863e2e"} Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.167003 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-947gb" event={"ID":"365ea813-ed43-4771-a20a-d8ad58487d86","Type":"ContainerStarted","Data":"d2c557c8e3e0d942d1d504243eccee3269c8420284e7b630591f3afd10a27610"} Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.195670 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-9zr5r" podStartSLOduration=4.195650231 podStartE2EDuration="4.195650231s" podCreationTimestamp="2026-02-23 13:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:25:53.192399199 +0000 UTC m=+1107.874102897" watchObservedRunningTime="2026-02-23 13:25:53.195650231 +0000 UTC m=+1107.877353909" Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.225708 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d081-account-create-update-xsbtx"] Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.226959 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-947gb" podStartSLOduration=1.529120534 podStartE2EDuration="5.226939866s" podCreationTimestamp="2026-02-23 13:25:48 +0000 UTC" firstStartedPulling="2026-02-23 13:25:48.933115222 +0000 UTC m=+1103.614818900" lastFinishedPulling="2026-02-23 13:25:52.630934554 +0000 UTC m=+1107.312638232" observedRunningTime="2026-02-23 13:25:53.218087026 +0000 UTC m=+1107.899790704" watchObservedRunningTime="2026-02-23 13:25:53.226939866 +0000 UTC m=+1107.908643534" Feb 23 13:25:53 crc kubenswrapper[4851]: W0223 13:25:53.236979 4851 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd49c3d1e_e4c6_42c7_8132_30aad920eade.slice/crio-31cae333d7d10fc126b197af84f297f570ce462425a0b617bfa7aea3a5c98d96 WatchSource:0}: Error finding container 31cae333d7d10fc126b197af84f297f570ce462425a0b617bfa7aea3a5c98d96: Status 404 returned error can't find the container with id 31cae333d7d10fc126b197af84f297f570ce462425a0b617bfa7aea3a5c98d96 Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.243953 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6ft7k"] Feb 23 13:25:53 crc kubenswrapper[4851]: W0223 13:25:53.249141 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda270ad09_d30d_4100_be0f_cc026ba47238.slice/crio-7f8e8a52428323c731e359452dd1d18ffa1f9bcb3a352c982feeb127dca6c412 WatchSource:0}: Error finding container 7f8e8a52428323c731e359452dd1d18ffa1f9bcb3a352c982feeb127dca6c412: Status 404 returned error can't find the container with id 7f8e8a52428323c731e359452dd1d18ffa1f9bcb3a352c982feeb127dca6c412 Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.250876 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0af4-account-create-update-tklbk"] Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.262529 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7ssg8"] Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.482511 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-29d8z" Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.583871 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5lfjs"] Feb 23 13:25:53 crc kubenswrapper[4851]: I0223 13:25:53.584097 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" 
podUID="6f911749-26ea-46ff-b63f-105bcd92c2b8" containerName="dnsmasq-dns" containerID="cri-o://35160ece8042e45142d86cd3a357f85302d0b42389996e7785a69779f925100b" gracePeriod=10 Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.176698 4851 generic.go:334] "Generic (PLEG): container finished" podID="152d8a17-4503-471d-adf6-8dcbd8d337db" containerID="a4a565f024bccf2aa765facf981ee999208b9f4cf418c2b0c54ff80aed0467ab" exitCode=0 Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.176854 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0af4-account-create-update-tklbk" event={"ID":"152d8a17-4503-471d-adf6-8dcbd8d337db","Type":"ContainerDied","Data":"a4a565f024bccf2aa765facf981ee999208b9f4cf418c2b0c54ff80aed0467ab"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.177223 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0af4-account-create-update-tklbk" event={"ID":"152d8a17-4503-471d-adf6-8dcbd8d337db","Type":"ContainerStarted","Data":"0758a7962080ed6c5ed970f11b7c932d31c413dbe4bb88f69c609f096c869a94"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.178583 4851 generic.go:334] "Generic (PLEG): container finished" podID="a270ad09-d30d-4100-be0f-cc026ba47238" containerID="0de3bcafb4fec9d09db18f7c343a8312d2cb3c9c7acd67a4086a3f4b6203dc93" exitCode=0 Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.178665 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6ft7k" event={"ID":"a270ad09-d30d-4100-be0f-cc026ba47238","Type":"ContainerDied","Data":"0de3bcafb4fec9d09db18f7c343a8312d2cb3c9c7acd67a4086a3f4b6203dc93"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.178696 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6ft7k" event={"ID":"a270ad09-d30d-4100-be0f-cc026ba47238","Type":"ContainerStarted","Data":"7f8e8a52428323c731e359452dd1d18ffa1f9bcb3a352c982feeb127dca6c412"} Feb 23 13:25:54 crc 
kubenswrapper[4851]: I0223 13:25:54.180819 4851 generic.go:334] "Generic (PLEG): container finished" podID="6f911749-26ea-46ff-b63f-105bcd92c2b8" containerID="35160ece8042e45142d86cd3a357f85302d0b42389996e7785a69779f925100b" exitCode=0 Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.180859 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" event={"ID":"6f911749-26ea-46ff-b63f-105bcd92c2b8","Type":"ContainerDied","Data":"35160ece8042e45142d86cd3a357f85302d0b42389996e7785a69779f925100b"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.180876 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" event={"ID":"6f911749-26ea-46ff-b63f-105bcd92c2b8","Type":"ContainerDied","Data":"6b5eff6f90b4ab459f4b72929ae95de9ba09db4e5ef4bf56b438955b417a882c"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.180886 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b5eff6f90b4ab459f4b72929ae95de9ba09db4e5ef4bf56b438955b417a882c" Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.182418 4851 generic.go:334] "Generic (PLEG): container finished" podID="6ceb3b82-a91b-49e4-8cb3-437e775e1fbc" containerID="f6669806b9cb9484ea7726ba18d82e21bce154589e2b0ca3f749ac37d17d52eb" exitCode=0 Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.182455 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wwt5b" event={"ID":"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc","Type":"ContainerDied","Data":"f6669806b9cb9484ea7726ba18d82e21bce154589e2b0ca3f749ac37d17d52eb"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.182469 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wwt5b" event={"ID":"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc","Type":"ContainerStarted","Data":"fb0c84a67f5cf61ea6bcdf2b27553a2fba3cf64acee0000f74a41e579b37a5d7"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 
13:25:54.183973 4851 generic.go:334] "Generic (PLEG): container finished" podID="cb1aeb9c-194d-4b98-9109-c2474a0a8767" containerID="54ace890021796066e431874b7676a56927731a7847fe88750045628c2f20947" exitCode=0 Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.184010 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b5c5-account-create-update-97hg2" event={"ID":"cb1aeb9c-194d-4b98-9109-c2474a0a8767","Type":"ContainerDied","Data":"54ace890021796066e431874b7676a56927731a7847fe88750045628c2f20947"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.185173 4851 generic.go:334] "Generic (PLEG): container finished" podID="82ddb3b2-3070-4d45-a408-2d0c3e3db5b7" containerID="fc0f15f4f74bd983ed422066f23056436521e781ac25b2ef0008971a01f5c960" exitCode=0 Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.185212 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9zr5r" event={"ID":"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7","Type":"ContainerDied","Data":"fc0f15f4f74bd983ed422066f23056436521e781ac25b2ef0008971a01f5c960"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.188187 4851 generic.go:334] "Generic (PLEG): container finished" podID="d49c3d1e-e4c6-42c7-8132-30aad920eade" containerID="709a3599d3a247d11b2017c42124e403ad4a01c01ca959da61eb61a4718eb3dc" exitCode=0 Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.188235 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d081-account-create-update-xsbtx" event={"ID":"d49c3d1e-e4c6-42c7-8132-30aad920eade","Type":"ContainerDied","Data":"709a3599d3a247d11b2017c42124e403ad4a01c01ca959da61eb61a4718eb3dc"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.188253 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d081-account-create-update-xsbtx" 
event={"ID":"d49c3d1e-e4c6-42c7-8132-30aad920eade","Type":"ContainerStarted","Data":"31cae333d7d10fc126b197af84f297f570ce462425a0b617bfa7aea3a5c98d96"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.196598 4851 generic.go:334] "Generic (PLEG): container finished" podID="2c7a1472-be44-4ead-a548-7a377e357ea0" containerID="2b04fee2c209efb5ee8f0342572ed47679ea8236d73e963cfff81f56bd730391" exitCode=0 Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.196689 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7ssg8" event={"ID":"2c7a1472-be44-4ead-a548-7a377e357ea0","Type":"ContainerDied","Data":"2b04fee2c209efb5ee8f0342572ed47679ea8236d73e963cfff81f56bd730391"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.196760 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7ssg8" event={"ID":"2c7a1472-be44-4ead-a548-7a377e357ea0","Type":"ContainerStarted","Data":"b35515edb476511f65bef4e9966683134c509967f5e19ddbd2c5e16abb4efca8"} Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.270973 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.363639 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5qwn\" (UniqueName: \"kubernetes.io/projected/6f911749-26ea-46ff-b63f-105bcd92c2b8-kube-api-access-z5qwn\") pod \"6f911749-26ea-46ff-b63f-105bcd92c2b8\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.363765 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-dns-svc\") pod \"6f911749-26ea-46ff-b63f-105bcd92c2b8\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.363829 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-ovsdbserver-nb\") pod \"6f911749-26ea-46ff-b63f-105bcd92c2b8\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.363856 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-config\") pod \"6f911749-26ea-46ff-b63f-105bcd92c2b8\" (UID: \"6f911749-26ea-46ff-b63f-105bcd92c2b8\") " Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.368863 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f911749-26ea-46ff-b63f-105bcd92c2b8-kube-api-access-z5qwn" (OuterVolumeSpecName: "kube-api-access-z5qwn") pod "6f911749-26ea-46ff-b63f-105bcd92c2b8" (UID: "6f911749-26ea-46ff-b63f-105bcd92c2b8"). InnerVolumeSpecName "kube-api-access-z5qwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.400852 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f911749-26ea-46ff-b63f-105bcd92c2b8" (UID: "6f911749-26ea-46ff-b63f-105bcd92c2b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.409682 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f911749-26ea-46ff-b63f-105bcd92c2b8" (UID: "6f911749-26ea-46ff-b63f-105bcd92c2b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.412575 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-config" (OuterVolumeSpecName: "config") pod "6f911749-26ea-46ff-b63f-105bcd92c2b8" (UID: "6f911749-26ea-46ff-b63f-105bcd92c2b8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.466990 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.467293 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.467409 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5qwn\" (UniqueName: \"kubernetes.io/projected/6f911749-26ea-46ff-b63f-105bcd92c2b8-kube-api-access-z5qwn\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:54 crc kubenswrapper[4851]: I0223 13:25:54.467514 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f911749-26ea-46ff-b63f-105bcd92c2b8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.204532 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-5lfjs" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.254462 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5lfjs"] Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.261484 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-5lfjs"] Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.575200 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7ssg8" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.695897 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c7a1472-be44-4ead-a548-7a377e357ea0-operator-scripts\") pod \"2c7a1472-be44-4ead-a548-7a377e357ea0\" (UID: \"2c7a1472-be44-4ead-a548-7a377e357ea0\") " Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.696156 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqgcq\" (UniqueName: \"kubernetes.io/projected/2c7a1472-be44-4ead-a548-7a377e357ea0-kube-api-access-cqgcq\") pod \"2c7a1472-be44-4ead-a548-7a377e357ea0\" (UID: \"2c7a1472-be44-4ead-a548-7a377e357ea0\") " Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.699853 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c7a1472-be44-4ead-a548-7a377e357ea0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c7a1472-be44-4ead-a548-7a377e357ea0" (UID: "2c7a1472-be44-4ead-a548-7a377e357ea0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.718557 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c7a1472-be44-4ead-a548-7a377e357ea0-kube-api-access-cqgcq" (OuterVolumeSpecName: "kube-api-access-cqgcq") pod "2c7a1472-be44-4ead-a548-7a377e357ea0" (UID: "2c7a1472-be44-4ead-a548-7a377e357ea0"). InnerVolumeSpecName "kube-api-access-cqgcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.798020 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c7a1472-be44-4ead-a548-7a377e357ea0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.798047 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqgcq\" (UniqueName: \"kubernetes.io/projected/2c7a1472-be44-4ead-a548-7a377e357ea0-kube-api-access-cqgcq\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.827715 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9zr5r" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.833951 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b5c5-account-create-update-97hg2" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.857213 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wwt5b" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.857303 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d081-account-create-update-xsbtx" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.882241 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6ft7k" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.889099 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0af4-account-create-update-tklbk" Feb 23 13:25:55 crc kubenswrapper[4851]: I0223 13:25:55.980870 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f911749-26ea-46ff-b63f-105bcd92c2b8" path="/var/lib/kubelet/pods/6f911749-26ea-46ff-b63f-105bcd92c2b8/volumes" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.002700 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92g4l\" (UniqueName: \"kubernetes.io/projected/cb1aeb9c-194d-4b98-9109-c2474a0a8767-kube-api-access-92g4l\") pod \"cb1aeb9c-194d-4b98-9109-c2474a0a8767\" (UID: \"cb1aeb9c-194d-4b98-9109-c2474a0a8767\") " Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.002765 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a270ad09-d30d-4100-be0f-cc026ba47238-operator-scripts\") pod \"a270ad09-d30d-4100-be0f-cc026ba47238\" (UID: \"a270ad09-d30d-4100-be0f-cc026ba47238\") " Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.002805 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/152d8a17-4503-471d-adf6-8dcbd8d337db-operator-scripts\") pod \"152d8a17-4503-471d-adf6-8dcbd8d337db\" (UID: \"152d8a17-4503-471d-adf6-8dcbd8d337db\") " Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.002851 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5d9g\" (UniqueName: \"kubernetes.io/projected/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-kube-api-access-x5d9g\") pod \"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7\" (UID: \"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7\") " Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.002893 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-operator-scripts\") pod \"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7\" (UID: \"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7\") " Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.002914 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvcjd\" (UniqueName: \"kubernetes.io/projected/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-kube-api-access-cvcjd\") pod \"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc\" (UID: \"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc\") " Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.002946 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb1aeb9c-194d-4b98-9109-c2474a0a8767-operator-scripts\") pod \"cb1aeb9c-194d-4b98-9109-c2474a0a8767\" (UID: \"cb1aeb9c-194d-4b98-9109-c2474a0a8767\") " Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.002971 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-operator-scripts\") pod \"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc\" (UID: \"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc\") " Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.003016 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7tbc\" (UniqueName: \"kubernetes.io/projected/152d8a17-4503-471d-adf6-8dcbd8d337db-kube-api-access-q7tbc\") pod \"152d8a17-4503-471d-adf6-8dcbd8d337db\" (UID: \"152d8a17-4503-471d-adf6-8dcbd8d337db\") " Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.003076 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d49c3d1e-e4c6-42c7-8132-30aad920eade-operator-scripts\") pod \"d49c3d1e-e4c6-42c7-8132-30aad920eade\" (UID: \"d49c3d1e-e4c6-42c7-8132-30aad920eade\") " Feb 23 
13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.003102 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rhml\" (UniqueName: \"kubernetes.io/projected/a270ad09-d30d-4100-be0f-cc026ba47238-kube-api-access-9rhml\") pod \"a270ad09-d30d-4100-be0f-cc026ba47238\" (UID: \"a270ad09-d30d-4100-be0f-cc026ba47238\") " Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.003124 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t8rv\" (UniqueName: \"kubernetes.io/projected/d49c3d1e-e4c6-42c7-8132-30aad920eade-kube-api-access-2t8rv\") pod \"d49c3d1e-e4c6-42c7-8132-30aad920eade\" (UID: \"d49c3d1e-e4c6-42c7-8132-30aad920eade\") " Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.003499 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/152d8a17-4503-471d-adf6-8dcbd8d337db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "152d8a17-4503-471d-adf6-8dcbd8d337db" (UID: "152d8a17-4503-471d-adf6-8dcbd8d337db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.003753 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a270ad09-d30d-4100-be0f-cc026ba47238-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a270ad09-d30d-4100-be0f-cc026ba47238" (UID: "a270ad09-d30d-4100-be0f-cc026ba47238"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.003779 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb1aeb9c-194d-4b98-9109-c2474a0a8767-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb1aeb9c-194d-4b98-9109-c2474a0a8767" (UID: "cb1aeb9c-194d-4b98-9109-c2474a0a8767"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.004126 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ceb3b82-a91b-49e4-8cb3-437e775e1fbc" (UID: "6ceb3b82-a91b-49e4-8cb3-437e775e1fbc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.004137 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "82ddb3b2-3070-4d45-a408-2d0c3e3db5b7" (UID: "82ddb3b2-3070-4d45-a408-2d0c3e3db5b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.004205 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49c3d1e-e4c6-42c7-8132-30aad920eade-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d49c3d1e-e4c6-42c7-8132-30aad920eade" (UID: "d49c3d1e-e4c6-42c7-8132-30aad920eade"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.005982 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1aeb9c-194d-4b98-9109-c2474a0a8767-kube-api-access-92g4l" (OuterVolumeSpecName: "kube-api-access-92g4l") pod "cb1aeb9c-194d-4b98-9109-c2474a0a8767" (UID: "cb1aeb9c-194d-4b98-9109-c2474a0a8767"). InnerVolumeSpecName "kube-api-access-92g4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.006497 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-kube-api-access-cvcjd" (OuterVolumeSpecName: "kube-api-access-cvcjd") pod "6ceb3b82-a91b-49e4-8cb3-437e775e1fbc" (UID: "6ceb3b82-a91b-49e4-8cb3-437e775e1fbc"). InnerVolumeSpecName "kube-api-access-cvcjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.006948 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-kube-api-access-x5d9g" (OuterVolumeSpecName: "kube-api-access-x5d9g") pod "82ddb3b2-3070-4d45-a408-2d0c3e3db5b7" (UID: "82ddb3b2-3070-4d45-a408-2d0c3e3db5b7"). InnerVolumeSpecName "kube-api-access-x5d9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.007595 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a270ad09-d30d-4100-be0f-cc026ba47238-kube-api-access-9rhml" (OuterVolumeSpecName: "kube-api-access-9rhml") pod "a270ad09-d30d-4100-be0f-cc026ba47238" (UID: "a270ad09-d30d-4100-be0f-cc026ba47238"). InnerVolumeSpecName "kube-api-access-9rhml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.009537 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/152d8a17-4503-471d-adf6-8dcbd8d337db-kube-api-access-q7tbc" (OuterVolumeSpecName: "kube-api-access-q7tbc") pod "152d8a17-4503-471d-adf6-8dcbd8d337db" (UID: "152d8a17-4503-471d-adf6-8dcbd8d337db"). InnerVolumeSpecName "kube-api-access-q7tbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.009753 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49c3d1e-e4c6-42c7-8132-30aad920eade-kube-api-access-2t8rv" (OuterVolumeSpecName: "kube-api-access-2t8rv") pod "d49c3d1e-e4c6-42c7-8132-30aad920eade" (UID: "d49c3d1e-e4c6-42c7-8132-30aad920eade"). InnerVolumeSpecName "kube-api-access-2t8rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104820 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5d9g\" (UniqueName: \"kubernetes.io/projected/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-kube-api-access-x5d9g\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104851 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104861 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvcjd\" (UniqueName: \"kubernetes.io/projected/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-kube-api-access-cvcjd\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104871 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104881 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb1aeb9c-194d-4b98-9109-c2474a0a8767-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104892 4851 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-q7tbc\" (UniqueName: \"kubernetes.io/projected/152d8a17-4503-471d-adf6-8dcbd8d337db-kube-api-access-q7tbc\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104903 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d49c3d1e-e4c6-42c7-8132-30aad920eade-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104914 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rhml\" (UniqueName: \"kubernetes.io/projected/a270ad09-d30d-4100-be0f-cc026ba47238-kube-api-access-9rhml\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104924 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t8rv\" (UniqueName: \"kubernetes.io/projected/d49c3d1e-e4c6-42c7-8132-30aad920eade-kube-api-access-2t8rv\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104934 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92g4l\" (UniqueName: \"kubernetes.io/projected/cb1aeb9c-194d-4b98-9109-c2474a0a8767-kube-api-access-92g4l\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104945 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a270ad09-d30d-4100-be0f-cc026ba47238-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.104954 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/152d8a17-4503-471d-adf6-8dcbd8d337db-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.212149 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-9zr5r" event={"ID":"82ddb3b2-3070-4d45-a408-2d0c3e3db5b7","Type":"ContainerDied","Data":"0c3407fec093f22895e043f0ce04ec06b57101c7bff76013aa0fcad620863e2e"} Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.212190 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c3407fec093f22895e043f0ce04ec06b57101c7bff76013aa0fcad620863e2e" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.212190 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9zr5r" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.215596 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d081-account-create-update-xsbtx" event={"ID":"d49c3d1e-e4c6-42c7-8132-30aad920eade","Type":"ContainerDied","Data":"31cae333d7d10fc126b197af84f297f570ce462425a0b617bfa7aea3a5c98d96"} Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.215646 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31cae333d7d10fc126b197af84f297f570ce462425a0b617bfa7aea3a5c98d96" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.215668 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d081-account-create-update-xsbtx" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.217493 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7ssg8" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.217513 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7ssg8" event={"ID":"2c7a1472-be44-4ead-a548-7a377e357ea0","Type":"ContainerDied","Data":"b35515edb476511f65bef4e9966683134c509967f5e19ddbd2c5e16abb4efca8"} Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.217537 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b35515edb476511f65bef4e9966683134c509967f5e19ddbd2c5e16abb4efca8" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.224887 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0af4-account-create-update-tklbk" event={"ID":"152d8a17-4503-471d-adf6-8dcbd8d337db","Type":"ContainerDied","Data":"0758a7962080ed6c5ed970f11b7c932d31c413dbe4bb88f69c609f096c869a94"} Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.224932 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0af4-account-create-update-tklbk" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.224950 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0758a7962080ed6c5ed970f11b7c932d31c413dbe4bb88f69c609f096c869a94" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.227427 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-6ft7k" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.227471 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6ft7k" event={"ID":"a270ad09-d30d-4100-be0f-cc026ba47238","Type":"ContainerDied","Data":"7f8e8a52428323c731e359452dd1d18ffa1f9bcb3a352c982feeb127dca6c412"} Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.227501 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f8e8a52428323c731e359452dd1d18ffa1f9bcb3a352c982feeb127dca6c412" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.229318 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wwt5b" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.229860 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wwt5b" event={"ID":"6ceb3b82-a91b-49e4-8cb3-437e775e1fbc","Type":"ContainerDied","Data":"fb0c84a67f5cf61ea6bcdf2b27553a2fba3cf64acee0000f74a41e579b37a5d7"} Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.229884 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0c84a67f5cf61ea6bcdf2b27553a2fba3cf64acee0000f74a41e579b37a5d7" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.231464 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b5c5-account-create-update-97hg2" event={"ID":"cb1aeb9c-194d-4b98-9109-c2474a0a8767","Type":"ContainerDied","Data":"8a8f0698f2076ccb334b61436c227298dfc4e82e640ffe01e75d984b87bbcbbf"} Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.231484 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a8f0698f2076ccb334b61436c227298dfc4e82e640ffe01e75d984b87bbcbbf" Feb 23 13:25:56 crc kubenswrapper[4851]: I0223 13:25:56.231523 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b5c5-account-create-update-97hg2" Feb 23 13:25:57 crc kubenswrapper[4851]: I0223 13:25:57.986174 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9zr5r"] Feb 23 13:25:57 crc kubenswrapper[4851]: I0223 13:25:57.986480 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9zr5r"] Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.085700 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2d2fh"] Feb 23 13:25:58 crc kubenswrapper[4851]: E0223 13:25:58.086076 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f911749-26ea-46ff-b63f-105bcd92c2b8" containerName="init" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086096 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f911749-26ea-46ff-b63f-105bcd92c2b8" containerName="init" Feb 23 13:25:58 crc kubenswrapper[4851]: E0223 13:25:58.086113 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f911749-26ea-46ff-b63f-105bcd92c2b8" containerName="dnsmasq-dns" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086122 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f911749-26ea-46ff-b63f-105bcd92c2b8" containerName="dnsmasq-dns" Feb 23 13:25:58 crc kubenswrapper[4851]: E0223 13:25:58.086135 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49c3d1e-e4c6-42c7-8132-30aad920eade" containerName="mariadb-account-create-update" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086144 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49c3d1e-e4c6-42c7-8132-30aad920eade" containerName="mariadb-account-create-update" Feb 23 13:25:58 crc kubenswrapper[4851]: E0223 13:25:58.086164 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceb3b82-a91b-49e4-8cb3-437e775e1fbc" containerName="mariadb-database-create" Feb 23 
13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086172 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceb3b82-a91b-49e4-8cb3-437e775e1fbc" containerName="mariadb-database-create" Feb 23 13:25:58 crc kubenswrapper[4851]: E0223 13:25:58.086186 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1aeb9c-194d-4b98-9109-c2474a0a8767" containerName="mariadb-account-create-update" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086194 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1aeb9c-194d-4b98-9109-c2474a0a8767" containerName="mariadb-account-create-update" Feb 23 13:25:58 crc kubenswrapper[4851]: E0223 13:25:58.086214 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ddb3b2-3070-4d45-a408-2d0c3e3db5b7" containerName="mariadb-account-create-update" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086223 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ddb3b2-3070-4d45-a408-2d0c3e3db5b7" containerName="mariadb-account-create-update" Feb 23 13:25:58 crc kubenswrapper[4851]: E0223 13:25:58.086239 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="152d8a17-4503-471d-adf6-8dcbd8d337db" containerName="mariadb-account-create-update" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086246 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="152d8a17-4503-471d-adf6-8dcbd8d337db" containerName="mariadb-account-create-update" Feb 23 13:25:58 crc kubenswrapper[4851]: E0223 13:25:58.086256 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a270ad09-d30d-4100-be0f-cc026ba47238" containerName="mariadb-database-create" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086264 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a270ad09-d30d-4100-be0f-cc026ba47238" containerName="mariadb-database-create" Feb 23 13:25:58 crc kubenswrapper[4851]: E0223 13:25:58.086276 4851 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2c7a1472-be44-4ead-a548-7a377e357ea0" containerName="mariadb-database-create" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086283 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c7a1472-be44-4ead-a548-7a377e357ea0" containerName="mariadb-database-create" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086503 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ceb3b82-a91b-49e4-8cb3-437e775e1fbc" containerName="mariadb-database-create" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086520 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ddb3b2-3070-4d45-a408-2d0c3e3db5b7" containerName="mariadb-account-create-update" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086529 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb1aeb9c-194d-4b98-9109-c2474a0a8767" containerName="mariadb-account-create-update" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086542 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c7a1472-be44-4ead-a548-7a377e357ea0" containerName="mariadb-database-create" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086553 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="a270ad09-d30d-4100-be0f-cc026ba47238" containerName="mariadb-database-create" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086564 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f911749-26ea-46ff-b63f-105bcd92c2b8" containerName="dnsmasq-dns" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086576 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="152d8a17-4503-471d-adf6-8dcbd8d337db" containerName="mariadb-account-create-update" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.086586 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49c3d1e-e4c6-42c7-8132-30aad920eade" containerName="mariadb-account-create-update" 
Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.087168 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2d2fh" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.090188 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.095818 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2d2fh"] Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.235577 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jps6m\" (UniqueName: \"kubernetes.io/projected/7a0f70a1-0ccf-4141-bce1-62cf620c492c-kube-api-access-jps6m\") pod \"root-account-create-update-2d2fh\" (UID: \"7a0f70a1-0ccf-4141-bce1-62cf620c492c\") " pod="openstack/root-account-create-update-2d2fh" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.235770 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a0f70a1-0ccf-4141-bce1-62cf620c492c-operator-scripts\") pod \"root-account-create-update-2d2fh\" (UID: \"7a0f70a1-0ccf-4141-bce1-62cf620c492c\") " pod="openstack/root-account-create-update-2d2fh" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.337522 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jps6m\" (UniqueName: \"kubernetes.io/projected/7a0f70a1-0ccf-4141-bce1-62cf620c492c-kube-api-access-jps6m\") pod \"root-account-create-update-2d2fh\" (UID: \"7a0f70a1-0ccf-4141-bce1-62cf620c492c\") " pod="openstack/root-account-create-update-2d2fh" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.337595 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7a0f70a1-0ccf-4141-bce1-62cf620c492c-operator-scripts\") pod \"root-account-create-update-2d2fh\" (UID: \"7a0f70a1-0ccf-4141-bce1-62cf620c492c\") " pod="openstack/root-account-create-update-2d2fh" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.338497 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a0f70a1-0ccf-4141-bce1-62cf620c492c-operator-scripts\") pod \"root-account-create-update-2d2fh\" (UID: \"7a0f70a1-0ccf-4141-bce1-62cf620c492c\") " pod="openstack/root-account-create-update-2d2fh" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.367281 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jps6m\" (UniqueName: \"kubernetes.io/projected/7a0f70a1-0ccf-4141-bce1-62cf620c492c-kube-api-access-jps6m\") pod \"root-account-create-update-2d2fh\" (UID: \"7a0f70a1-0ccf-4141-bce1-62cf620c492c\") " pod="openstack/root-account-create-update-2d2fh" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.404077 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2d2fh" Feb 23 13:25:58 crc kubenswrapper[4851]: I0223 13:25:58.826967 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2d2fh"] Feb 23 13:25:59 crc kubenswrapper[4851]: I0223 13:25:59.255489 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2d2fh" event={"ID":"7a0f70a1-0ccf-4141-bce1-62cf620c492c","Type":"ContainerStarted","Data":"527777b3b2622bd579d951f62fa95637f1a51a098114301af1eb46fb6a654d85"} Feb 23 13:25:59 crc kubenswrapper[4851]: I0223 13:25:59.978592 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ddb3b2-3070-4d45-a408-2d0c3e3db5b7" path="/var/lib/kubelet/pods/82ddb3b2-3070-4d45-a408-2d0c3e3db5b7/volumes" Feb 23 13:26:00 crc kubenswrapper[4851]: I0223 13:26:00.064918 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:26:00 crc kubenswrapper[4851]: E0223 13:26:00.065197 4851 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 13:26:00 crc kubenswrapper[4851]: E0223 13:26:00.065235 4851 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 13:26:00 crc kubenswrapper[4851]: E0223 13:26:00.065351 4851 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift podName:7e3cd939-1e76-4a55-bb7b-614ae880e79c nodeName:}" failed. No retries permitted until 2026-02-23 13:26:16.065286343 +0000 UTC m=+1130.746990081 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift") pod "swift-storage-0" (UID: "7e3cd939-1e76-4a55-bb7b-614ae880e79c") : configmap "swift-ring-files" not found Feb 23 13:26:00 crc kubenswrapper[4851]: I0223 13:26:00.263089 4851 generic.go:334] "Generic (PLEG): container finished" podID="365ea813-ed43-4771-a20a-d8ad58487d86" containerID="d2c557c8e3e0d942d1d504243eccee3269c8420284e7b630591f3afd10a27610" exitCode=0 Feb 23 13:26:00 crc kubenswrapper[4851]: I0223 13:26:00.263175 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-947gb" event={"ID":"365ea813-ed43-4771-a20a-d8ad58487d86","Type":"ContainerDied","Data":"d2c557c8e3e0d942d1d504243eccee3269c8420284e7b630591f3afd10a27610"} Feb 23 13:26:00 crc kubenswrapper[4851]: I0223 13:26:00.264766 4851 generic.go:334] "Generic (PLEG): container finished" podID="7a0f70a1-0ccf-4141-bce1-62cf620c492c" containerID="4774757c90c84af39a4b41cfecb98ad845c4216bb3a362fa221e71db14828f30" exitCode=0 Feb 23 13:26:00 crc kubenswrapper[4851]: I0223 13:26:00.264804 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2d2fh" event={"ID":"7a0f70a1-0ccf-4141-bce1-62cf620c492c","Type":"ContainerDied","Data":"4774757c90c84af39a4b41cfecb98ad845c4216bb3a362fa221e71db14828f30"} Feb 23 13:26:00 crc kubenswrapper[4851]: I0223 13:26:00.579348 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.490146 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6sw4p"] Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.491711 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.493968 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-69264" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.494239 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.513487 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6sw4p"] Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.592087 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-db-sync-config-data\") pod \"glance-db-sync-6sw4p\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.592165 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-combined-ca-bundle\") pod \"glance-db-sync-6sw4p\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.592202 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-config-data\") pod \"glance-db-sync-6sw4p\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.592257 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cs4c\" (UniqueName: 
\"kubernetes.io/projected/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-kube-api-access-9cs4c\") pod \"glance-db-sync-6sw4p\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.694052 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cs4c\" (UniqueName: \"kubernetes.io/projected/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-kube-api-access-9cs4c\") pod \"glance-db-sync-6sw4p\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.694151 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-db-sync-config-data\") pod \"glance-db-sync-6sw4p\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.695285 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-combined-ca-bundle\") pod \"glance-db-sync-6sw4p\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.695348 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-config-data\") pod \"glance-db-sync-6sw4p\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.700432 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-config-data\") pod \"glance-db-sync-6sw4p\" (UID: 
\"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.700466 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-combined-ca-bundle\") pod \"glance-db-sync-6sw4p\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.700589 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-db-sync-config-data\") pod \"glance-db-sync-6sw4p\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.713988 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cs4c\" (UniqueName: \"kubernetes.io/projected/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-kube-api-access-9cs4c\") pod \"glance-db-sync-6sw4p\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.763869 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.769572 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2d2fh" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.808523 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.898367 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jps6m\" (UniqueName: \"kubernetes.io/projected/7a0f70a1-0ccf-4141-bce1-62cf620c492c-kube-api-access-jps6m\") pod \"7a0f70a1-0ccf-4141-bce1-62cf620c492c\" (UID: \"7a0f70a1-0ccf-4141-bce1-62cf620c492c\") " Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.898410 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a0f70a1-0ccf-4141-bce1-62cf620c492c-operator-scripts\") pod \"7a0f70a1-0ccf-4141-bce1-62cf620c492c\" (UID: \"7a0f70a1-0ccf-4141-bce1-62cf620c492c\") " Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.898439 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-ring-data-devices\") pod \"365ea813-ed43-4771-a20a-d8ad58487d86\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.898483 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wszt8\" (UniqueName: \"kubernetes.io/projected/365ea813-ed43-4771-a20a-d8ad58487d86-kube-api-access-wszt8\") pod \"365ea813-ed43-4771-a20a-d8ad58487d86\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.898551 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/365ea813-ed43-4771-a20a-d8ad58487d86-etc-swift\") pod \"365ea813-ed43-4771-a20a-d8ad58487d86\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.898610 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-scripts\") pod \"365ea813-ed43-4771-a20a-d8ad58487d86\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.898646 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-swiftconf\") pod \"365ea813-ed43-4771-a20a-d8ad58487d86\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.898660 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-combined-ca-bundle\") pod \"365ea813-ed43-4771-a20a-d8ad58487d86\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.898699 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-dispersionconf\") pod \"365ea813-ed43-4771-a20a-d8ad58487d86\" (UID: \"365ea813-ed43-4771-a20a-d8ad58487d86\") " Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.899953 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365ea813-ed43-4771-a20a-d8ad58487d86-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "365ea813-ed43-4771-a20a-d8ad58487d86" (UID: "365ea813-ed43-4771-a20a-d8ad58487d86"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.899998 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a0f70a1-0ccf-4141-bce1-62cf620c492c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a0f70a1-0ccf-4141-bce1-62cf620c492c" (UID: "7a0f70a1-0ccf-4141-bce1-62cf620c492c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.900015 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "365ea813-ed43-4771-a20a-d8ad58487d86" (UID: "365ea813-ed43-4771-a20a-d8ad58487d86"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.903504 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0f70a1-0ccf-4141-bce1-62cf620c492c-kube-api-access-jps6m" (OuterVolumeSpecName: "kube-api-access-jps6m") pod "7a0f70a1-0ccf-4141-bce1-62cf620c492c" (UID: "7a0f70a1-0ccf-4141-bce1-62cf620c492c"). InnerVolumeSpecName "kube-api-access-jps6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.903666 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365ea813-ed43-4771-a20a-d8ad58487d86-kube-api-access-wszt8" (OuterVolumeSpecName: "kube-api-access-wszt8") pod "365ea813-ed43-4771-a20a-d8ad58487d86" (UID: "365ea813-ed43-4771-a20a-d8ad58487d86"). InnerVolumeSpecName "kube-api-access-wszt8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.905608 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "365ea813-ed43-4771-a20a-d8ad58487d86" (UID: "365ea813-ed43-4771-a20a-d8ad58487d86"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.919812 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-scripts" (OuterVolumeSpecName: "scripts") pod "365ea813-ed43-4771-a20a-d8ad58487d86" (UID: "365ea813-ed43-4771-a20a-d8ad58487d86"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.922358 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "365ea813-ed43-4771-a20a-d8ad58487d86" (UID: "365ea813-ed43-4771-a20a-d8ad58487d86"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:01 crc kubenswrapper[4851]: I0223 13:26:01.924559 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "365ea813-ed43-4771-a20a-d8ad58487d86" (UID: "365ea813-ed43-4771-a20a-d8ad58487d86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.004441 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.004477 4851 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.004487 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.004499 4851 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/365ea813-ed43-4771-a20a-d8ad58487d86-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.004510 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jps6m\" (UniqueName: \"kubernetes.io/projected/7a0f70a1-0ccf-4141-bce1-62cf620c492c-kube-api-access-jps6m\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.004519 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a0f70a1-0ccf-4141-bce1-62cf620c492c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.004527 4851 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/365ea813-ed43-4771-a20a-d8ad58487d86-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.004565 4851 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wszt8\" (UniqueName: \"kubernetes.io/projected/365ea813-ed43-4771-a20a-d8ad58487d86-kube-api-access-wszt8\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.004576 4851 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/365ea813-ed43-4771-a20a-d8ad58487d86-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.279055 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2d2fh" event={"ID":"7a0f70a1-0ccf-4141-bce1-62cf620c492c","Type":"ContainerDied","Data":"527777b3b2622bd579d951f62fa95637f1a51a098114301af1eb46fb6a654d85"} Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.279372 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527777b3b2622bd579d951f62fa95637f1a51a098114301af1eb46fb6a654d85" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.279078 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2d2fh" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.280121 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-947gb" event={"ID":"365ea813-ed43-4771-a20a-d8ad58487d86","Type":"ContainerDied","Data":"5c70dee143dbbd7879d12394c8cf81cc07cb2eb15e60e73428faf417742d916a"} Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.280173 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c70dee143dbbd7879d12394c8cf81cc07cb2eb15e60e73428faf417742d916a" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.280216 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-947gb" Feb 23 13:26:02 crc kubenswrapper[4851]: I0223 13:26:02.293178 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6sw4p"] Feb 23 13:26:03 crc kubenswrapper[4851]: I0223 13:26:03.288716 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6sw4p" event={"ID":"b07f3810-fe79-4343-a6c7-0fa6e0281b2d","Type":"ContainerStarted","Data":"3dff08d46478b1aed53989efdc16233cf410e44e51346ee0a0e08b48e970c83d"} Feb 23 13:26:04 crc kubenswrapper[4851]: I0223 13:26:04.441145 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2d2fh"] Feb 23 13:26:04 crc kubenswrapper[4851]: I0223 13:26:04.449238 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2d2fh"] Feb 23 13:26:05 crc kubenswrapper[4851]: I0223 13:26:05.977591 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a0f70a1-0ccf-4141-bce1-62cf620c492c" path="/var/lib/kubelet/pods/7a0f70a1-0ccf-4141-bce1-62cf620c492c/volumes" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.211246 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.218768 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-42p6n" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.451802 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2rf22-config-lq74l"] Feb 23 13:26:06 crc kubenswrapper[4851]: E0223 13:26:06.453090 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365ea813-ed43-4771-a20a-d8ad58487d86" containerName="swift-ring-rebalance" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.453117 4851 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="365ea813-ed43-4771-a20a-d8ad58487d86" containerName="swift-ring-rebalance" Feb 23 13:26:06 crc kubenswrapper[4851]: E0223 13:26:06.453162 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0f70a1-0ccf-4141-bce1-62cf620c492c" containerName="mariadb-account-create-update" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.453171 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0f70a1-0ccf-4141-bce1-62cf620c492c" containerName="mariadb-account-create-update" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.477540 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="365ea813-ed43-4771-a20a-d8ad58487d86" containerName="swift-ring-rebalance" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.477659 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0f70a1-0ccf-4141-bce1-62cf620c492c" containerName="mariadb-account-create-update" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.479142 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.482372 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.485778 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2rf22-config-lq74l"] Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.511597 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-log-ovn\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.511646 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.511693 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq24w\" (UniqueName: \"kubernetes.io/projected/d374eb95-1961-4609-a5a8-3022938ceb54-kube-api-access-gq24w\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.511748 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-additional-scripts\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: 
\"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.511779 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run-ovn\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.511801 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-scripts\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.612792 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-log-ovn\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.612836 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.612881 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq24w\" (UniqueName: \"kubernetes.io/projected/d374eb95-1961-4609-a5a8-3022938ceb54-kube-api-access-gq24w\") pod 
\"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.612944 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-additional-scripts\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.612974 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run-ovn\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.612992 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-scripts\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.613172 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-log-ovn\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.613203 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run-ovn\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: 
\"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.613203 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.613774 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-additional-scripts\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.615218 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-scripts\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.633171 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq24w\" (UniqueName: \"kubernetes.io/projected/d374eb95-1961-4609-a5a8-3022938ceb54-kube-api-access-gq24w\") pod \"ovn-controller-2rf22-config-lq74l\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:06 crc kubenswrapper[4851]: I0223 13:26:06.855225 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:07 crc kubenswrapper[4851]: I0223 13:26:07.317653 4851 generic.go:334] "Generic (PLEG): container finished" podID="ec010635-96e5-448a-98c1-e458fd6f31ed" containerID="e273fe812a61abead0849f007f3f26e978b68df2e0939cf7a163e3001984bc7a" exitCode=0 Feb 23 13:26:07 crc kubenswrapper[4851]: I0223 13:26:07.317751 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec010635-96e5-448a-98c1-e458fd6f31ed","Type":"ContainerDied","Data":"e273fe812a61abead0849f007f3f26e978b68df2e0939cf7a163e3001984bc7a"} Feb 23 13:26:07 crc kubenswrapper[4851]: I0223 13:26:07.319249 4851 generic.go:334] "Generic (PLEG): container finished" podID="46bf34c9-f0ec-4de6-ae40-fd334c23af27" containerID="de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b" exitCode=0 Feb 23 13:26:07 crc kubenswrapper[4851]: I0223 13:26:07.319299 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"46bf34c9-f0ec-4de6-ae40-fd334c23af27","Type":"ContainerDied","Data":"de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b"} Feb 23 13:26:09 crc kubenswrapper[4851]: I0223 13:26:09.454903 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tpxbb"] Feb 23 13:26:09 crc kubenswrapper[4851]: I0223 13:26:09.456083 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tpxbb" Feb 23 13:26:09 crc kubenswrapper[4851]: I0223 13:26:09.460347 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 23 13:26:09 crc kubenswrapper[4851]: I0223 13:26:09.471709 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tpxbb"] Feb 23 13:26:09 crc kubenswrapper[4851]: I0223 13:26:09.559876 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjv92\" (UniqueName: \"kubernetes.io/projected/dbb17c35-0ab0-4089-9847-d2acfdb17332-kube-api-access-cjv92\") pod \"root-account-create-update-tpxbb\" (UID: \"dbb17c35-0ab0-4089-9847-d2acfdb17332\") " pod="openstack/root-account-create-update-tpxbb" Feb 23 13:26:09 crc kubenswrapper[4851]: I0223 13:26:09.559984 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb17c35-0ab0-4089-9847-d2acfdb17332-operator-scripts\") pod \"root-account-create-update-tpxbb\" (UID: \"dbb17c35-0ab0-4089-9847-d2acfdb17332\") " pod="openstack/root-account-create-update-tpxbb" Feb 23 13:26:09 crc kubenswrapper[4851]: I0223 13:26:09.661513 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjv92\" (UniqueName: \"kubernetes.io/projected/dbb17c35-0ab0-4089-9847-d2acfdb17332-kube-api-access-cjv92\") pod \"root-account-create-update-tpxbb\" (UID: \"dbb17c35-0ab0-4089-9847-d2acfdb17332\") " pod="openstack/root-account-create-update-tpxbb" Feb 23 13:26:09 crc kubenswrapper[4851]: I0223 13:26:09.661614 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb17c35-0ab0-4089-9847-d2acfdb17332-operator-scripts\") pod \"root-account-create-update-tpxbb\" (UID: 
\"dbb17c35-0ab0-4089-9847-d2acfdb17332\") " pod="openstack/root-account-create-update-tpxbb" Feb 23 13:26:09 crc kubenswrapper[4851]: I0223 13:26:09.662277 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb17c35-0ab0-4089-9847-d2acfdb17332-operator-scripts\") pod \"root-account-create-update-tpxbb\" (UID: \"dbb17c35-0ab0-4089-9847-d2acfdb17332\") " pod="openstack/root-account-create-update-tpxbb" Feb 23 13:26:09 crc kubenswrapper[4851]: I0223 13:26:09.681690 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjv92\" (UniqueName: \"kubernetes.io/projected/dbb17c35-0ab0-4089-9847-d2acfdb17332-kube-api-access-cjv92\") pod \"root-account-create-update-tpxbb\" (UID: \"dbb17c35-0ab0-4089-9847-d2acfdb17332\") " pod="openstack/root-account-create-update-tpxbb" Feb 23 13:26:09 crc kubenswrapper[4851]: I0223 13:26:09.775051 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tpxbb" Feb 23 13:26:11 crc kubenswrapper[4851]: I0223 13:26:11.924814 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:26:11 crc kubenswrapper[4851]: I0223 13:26:11.925074 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:26:13 crc kubenswrapper[4851]: I0223 13:26:13.222800 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2rf22-config-lq74l"] Feb 23 13:26:13 crc kubenswrapper[4851]: W0223 13:26:13.228775 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd374eb95_1961_4609_a5a8_3022938ceb54.slice/crio-c492b90df5e003565d6659aba558f3890f186a7e2449a66e1ce72a33f370c6c6 WatchSource:0}: Error finding container c492b90df5e003565d6659aba558f3890f186a7e2449a66e1ce72a33f370c6c6: Status 404 returned error can't find the container with id c492b90df5e003565d6659aba558f3890f186a7e2449a66e1ce72a33f370c6c6 Feb 23 13:26:13 crc kubenswrapper[4851]: I0223 13:26:13.318431 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tpxbb"] Feb 23 13:26:13 crc kubenswrapper[4851]: I0223 13:26:13.373890 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"46bf34c9-f0ec-4de6-ae40-fd334c23af27","Type":"ContainerStarted","Data":"f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a"} Feb 23 13:26:13 crc kubenswrapper[4851]: I0223 13:26:13.374139 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 23 13:26:13 crc kubenswrapper[4851]: I0223 13:26:13.375258 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2rf22-config-lq74l" event={"ID":"d374eb95-1961-4609-a5a8-3022938ceb54","Type":"ContainerStarted","Data":"c492b90df5e003565d6659aba558f3890f186a7e2449a66e1ce72a33f370c6c6"} Feb 23 13:26:13 crc kubenswrapper[4851]: I0223 13:26:13.376243 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tpxbb" event={"ID":"dbb17c35-0ab0-4089-9847-d2acfdb17332","Type":"ContainerStarted","Data":"468dbf06d15e9d61eb9d02fd4b5b70f9f5186042db68de1e4eaf9590d7335613"} Feb 23 13:26:13 crc kubenswrapper[4851]: I0223 13:26:13.379614 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec010635-96e5-448a-98c1-e458fd6f31ed","Type":"ContainerStarted","Data":"9da6a99616ae7b031212de15b4177e80e168f11ac0fde7fc4f50bbd2312d38b8"} Feb 23 13:26:13 crc kubenswrapper[4851]: I0223 13:26:13.380446 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:13 crc kubenswrapper[4851]: I0223 13:26:13.401945 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=53.781337456 podStartE2EDuration="1m7.401925466s" podCreationTimestamp="2026-02-23 13:25:06 +0000 UTC" firstStartedPulling="2026-02-23 13:25:19.202119611 +0000 UTC m=+1073.883823279" lastFinishedPulling="2026-02-23 13:25:32.822707611 +0000 UTC m=+1087.504411289" observedRunningTime="2026-02-23 13:26:13.40098838 +0000 UTC m=+1128.082692078" 
watchObservedRunningTime="2026-02-23 13:26:13.401925466 +0000 UTC m=+1128.083629144" Feb 23 13:26:13 crc kubenswrapper[4851]: I0223 13:26:13.432653 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=54.596221788 podStartE2EDuration="1m7.432635775s" podCreationTimestamp="2026-02-23 13:25:06 +0000 UTC" firstStartedPulling="2026-02-23 13:25:19.209785348 +0000 UTC m=+1073.891489026" lastFinishedPulling="2026-02-23 13:25:32.046199335 +0000 UTC m=+1086.727903013" observedRunningTime="2026-02-23 13:26:13.426255955 +0000 UTC m=+1128.107959643" watchObservedRunningTime="2026-02-23 13:26:13.432635775 +0000 UTC m=+1128.114339453" Feb 23 13:26:14 crc kubenswrapper[4851]: I0223 13:26:14.387250 4851 generic.go:334] "Generic (PLEG): container finished" podID="dbb17c35-0ab0-4089-9847-d2acfdb17332" containerID="b09c6d6d23b3aeee601b10d7ac6e559e18764bf963f4ebb98f177785b5b08827" exitCode=0 Feb 23 13:26:14 crc kubenswrapper[4851]: I0223 13:26:14.387319 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tpxbb" event={"ID":"dbb17c35-0ab0-4089-9847-d2acfdb17332","Type":"ContainerDied","Data":"b09c6d6d23b3aeee601b10d7ac6e559e18764bf963f4ebb98f177785b5b08827"} Feb 23 13:26:14 crc kubenswrapper[4851]: I0223 13:26:14.389168 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6sw4p" event={"ID":"b07f3810-fe79-4343-a6c7-0fa6e0281b2d","Type":"ContainerStarted","Data":"2a1bc1ca0bfea6d912442719076208c8c61f56dc08b5936b91919337e368c305"} Feb 23 13:26:14 crc kubenswrapper[4851]: I0223 13:26:14.390513 4851 generic.go:334] "Generic (PLEG): container finished" podID="d374eb95-1961-4609-a5a8-3022938ceb54" containerID="d9908e734ffd1681ee71fa90fd4e0d7755811d0799fafa8695ccaf2f6bc05f0b" exitCode=0 Feb 23 13:26:14 crc kubenswrapper[4851]: I0223 13:26:14.390597 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-2rf22-config-lq74l" event={"ID":"d374eb95-1961-4609-a5a8-3022938ceb54","Type":"ContainerDied","Data":"d9908e734ffd1681ee71fa90fd4e0d7755811d0799fafa8695ccaf2f6bc05f0b"} Feb 23 13:26:14 crc kubenswrapper[4851]: I0223 13:26:14.438029 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6sw4p" podStartSLOduration=2.89089122 podStartE2EDuration="13.43800847s" podCreationTimestamp="2026-02-23 13:26:01 +0000 UTC" firstStartedPulling="2026-02-23 13:26:02.303105987 +0000 UTC m=+1116.984809665" lastFinishedPulling="2026-02-23 13:26:12.850223237 +0000 UTC m=+1127.531926915" observedRunningTime="2026-02-23 13:26:14.428517772 +0000 UTC m=+1129.110221450" watchObservedRunningTime="2026-02-23 13:26:14.43800847 +0000 UTC m=+1129.119712148" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.683870 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.773552 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tpxbb" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873190 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb17c35-0ab0-4089-9847-d2acfdb17332-operator-scripts\") pod \"dbb17c35-0ab0-4089-9847-d2acfdb17332\" (UID: \"dbb17c35-0ab0-4089-9847-d2acfdb17332\") " Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873295 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run\") pod \"d374eb95-1961-4609-a5a8-3022938ceb54\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873353 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-log-ovn\") pod \"d374eb95-1961-4609-a5a8-3022938ceb54\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873414 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run-ovn\") pod \"d374eb95-1961-4609-a5a8-3022938ceb54\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873465 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjv92\" (UniqueName: \"kubernetes.io/projected/dbb17c35-0ab0-4089-9847-d2acfdb17332-kube-api-access-cjv92\") pod \"dbb17c35-0ab0-4089-9847-d2acfdb17332\" (UID: \"dbb17c35-0ab0-4089-9847-d2acfdb17332\") " Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873494 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-scripts\") pod \"d374eb95-1961-4609-a5a8-3022938ceb54\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873388 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run" (OuterVolumeSpecName: "var-run") pod "d374eb95-1961-4609-a5a8-3022938ceb54" (UID: "d374eb95-1961-4609-a5a8-3022938ceb54"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873509 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-additional-scripts\") pod \"d374eb95-1961-4609-a5a8-3022938ceb54\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873539 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq24w\" (UniqueName: \"kubernetes.io/projected/d374eb95-1961-4609-a5a8-3022938ceb54-kube-api-access-gq24w\") pod \"d374eb95-1961-4609-a5a8-3022938ceb54\" (UID: \"d374eb95-1961-4609-a5a8-3022938ceb54\") " Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873495 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d374eb95-1961-4609-a5a8-3022938ceb54" (UID: "d374eb95-1961-4609-a5a8-3022938ceb54"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873440 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d374eb95-1961-4609-a5a8-3022938ceb54" (UID: "d374eb95-1961-4609-a5a8-3022938ceb54"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.873798 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbb17c35-0ab0-4089-9847-d2acfdb17332-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbb17c35-0ab0-4089-9847-d2acfdb17332" (UID: "dbb17c35-0ab0-4089-9847-d2acfdb17332"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.874137 4851 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.874166 4851 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.874176 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb17c35-0ab0-4089-9847-d2acfdb17332-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.874187 4851 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d374eb95-1961-4609-a5a8-3022938ceb54-var-run\") on node \"crc\" DevicePath \"\"" Feb 23 
13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.874290 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d374eb95-1961-4609-a5a8-3022938ceb54" (UID: "d374eb95-1961-4609-a5a8-3022938ceb54"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.874504 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-scripts" (OuterVolumeSpecName: "scripts") pod "d374eb95-1961-4609-a5a8-3022938ceb54" (UID: "d374eb95-1961-4609-a5a8-3022938ceb54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.879510 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb17c35-0ab0-4089-9847-d2acfdb17332-kube-api-access-cjv92" (OuterVolumeSpecName: "kube-api-access-cjv92") pod "dbb17c35-0ab0-4089-9847-d2acfdb17332" (UID: "dbb17c35-0ab0-4089-9847-d2acfdb17332"). InnerVolumeSpecName "kube-api-access-cjv92". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.888546 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d374eb95-1961-4609-a5a8-3022938ceb54-kube-api-access-gq24w" (OuterVolumeSpecName: "kube-api-access-gq24w") pod "d374eb95-1961-4609-a5a8-3022938ceb54" (UID: "d374eb95-1961-4609-a5a8-3022938ceb54"). InnerVolumeSpecName "kube-api-access-gq24w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.975481 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.975508 4851 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d374eb95-1961-4609-a5a8-3022938ceb54-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.975519 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq24w\" (UniqueName: \"kubernetes.io/projected/d374eb95-1961-4609-a5a8-3022938ceb54-kube-api-access-gq24w\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:15 crc kubenswrapper[4851]: I0223 13:26:15.975529 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjv92\" (UniqueName: \"kubernetes.io/projected/dbb17c35-0ab0-4089-9847-d2acfdb17332-kube-api-access-cjv92\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.076317 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.084393 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e3cd939-1e76-4a55-bb7b-614ae880e79c-etc-swift\") pod \"swift-storage-0\" (UID: \"7e3cd939-1e76-4a55-bb7b-614ae880e79c\") " pod="openstack/swift-storage-0" Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.175177 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovn-controller-2rf22" Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.307927 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.424544 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2rf22-config-lq74l" event={"ID":"d374eb95-1961-4609-a5a8-3022938ceb54","Type":"ContainerDied","Data":"c492b90df5e003565d6659aba558f3890f186a7e2449a66e1ce72a33f370c6c6"} Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.424842 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c492b90df5e003565d6659aba558f3890f186a7e2449a66e1ce72a33f370c6c6" Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.424633 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2rf22-config-lq74l" Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.456082 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tpxbb" event={"ID":"dbb17c35-0ab0-4089-9847-d2acfdb17332","Type":"ContainerDied","Data":"468dbf06d15e9d61eb9d02fd4b5b70f9f5186042db68de1e4eaf9590d7335613"} Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.456114 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="468dbf06d15e9d61eb9d02fd4b5b70f9f5186042db68de1e4eaf9590d7335613" Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.456201 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tpxbb" Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.787931 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2rf22-config-lq74l"] Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.794359 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2rf22-config-lq74l"] Feb 23 13:26:16 crc kubenswrapper[4851]: W0223 13:26:16.827468 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e3cd939_1e76_4a55_bb7b_614ae880e79c.slice/crio-c57d1a813578567cd5715db26d24cf3beb0ea3bc803c4257915ac2991fee5a86 WatchSource:0}: Error finding container c57d1a813578567cd5715db26d24cf3beb0ea3bc803c4257915ac2991fee5a86: Status 404 returned error can't find the container with id c57d1a813578567cd5715db26d24cf3beb0ea3bc803c4257915ac2991fee5a86 Feb 23 13:26:16 crc kubenswrapper[4851]: I0223 13:26:16.839535 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 13:26:17 crc kubenswrapper[4851]: I0223 13:26:17.466134 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"c57d1a813578567cd5715db26d24cf3beb0ea3bc803c4257915ac2991fee5a86"} Feb 23 13:26:17 crc kubenswrapper[4851]: I0223 13:26:17.978405 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d374eb95-1961-4609-a5a8-3022938ceb54" path="/var/lib/kubelet/pods/d374eb95-1961-4609-a5a8-3022938ceb54/volumes" Feb 23 13:26:18 crc kubenswrapper[4851]: I0223 13:26:18.474903 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"dcb834b450adc89bdd977587fe47a2e1b372dd68d1b24d6fb473bd3698205ee5"} Feb 23 13:26:18 crc kubenswrapper[4851]: 
I0223 13:26:18.474944 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"377682ca4fa1b1a0e44533425ede7e854b476adcae21a27cc788aa89dcd5f468"} Feb 23 13:26:18 crc kubenswrapper[4851]: I0223 13:26:18.474967 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"ecbf680c0734e0c90e92a8ed79c443281fb3544ede12638c45ef005332a5b8d7"} Feb 23 13:26:19 crc kubenswrapper[4851]: I0223 13:26:19.484623 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"b443438af083451e2c7d6945444d076fc98528771abec0cd974f5ed33dbf7f4d"} Feb 23 13:26:20 crc kubenswrapper[4851]: I0223 13:26:20.492808 4851 generic.go:334] "Generic (PLEG): container finished" podID="b07f3810-fe79-4343-a6c7-0fa6e0281b2d" containerID="2a1bc1ca0bfea6d912442719076208c8c61f56dc08b5936b91919337e368c305" exitCode=0 Feb 23 13:26:20 crc kubenswrapper[4851]: I0223 13:26:20.492875 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6sw4p" event={"ID":"b07f3810-fe79-4343-a6c7-0fa6e0281b2d","Type":"ContainerDied","Data":"2a1bc1ca0bfea6d912442719076208c8c61f56dc08b5936b91919337e368c305"} Feb 23 13:26:20 crc kubenswrapper[4851]: I0223 13:26:20.497097 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"1ef6184dbec5c3e2917f7205e7d0cadb133375eb4642acfc70aa42fdb2cdfa0f"} Feb 23 13:26:20 crc kubenswrapper[4851]: I0223 13:26:20.497141 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"b50512e55e398cc89534a747259f3d9252418795371fb503ae31abdeb1304624"} Feb 23 13:26:20 crc kubenswrapper[4851]: I0223 13:26:20.497152 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"1f34f85d1440f495d6c501a40a8e661287be1bdd723bb5e3deba1879ae8ade61"} Feb 23 13:26:20 crc kubenswrapper[4851]: I0223 13:26:20.497162 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"21f799be71cfbab6ee72272b530054d6e93a13de14d52371f7f42bc86c962bf4"} Feb 23 13:26:21 crc kubenswrapper[4851]: I0223 13:26:21.509637 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"a1f10f8cdd636f1c04b0bdb84c596e2351b510ca213995102250042dbebdefd4"} Feb 23 13:26:21 crc kubenswrapper[4851]: I0223 13:26:21.822904 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:21 crc kubenswrapper[4851]: I0223 13:26:21.965683 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cs4c\" (UniqueName: \"kubernetes.io/projected/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-kube-api-access-9cs4c\") pod \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " Feb 23 13:26:21 crc kubenswrapper[4851]: I0223 13:26:21.965823 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-combined-ca-bundle\") pod \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " Feb 23 13:26:21 crc kubenswrapper[4851]: I0223 13:26:21.965936 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-db-sync-config-data\") pod \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " Feb 23 13:26:21 crc kubenswrapper[4851]: I0223 13:26:21.966012 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-config-data\") pod \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\" (UID: \"b07f3810-fe79-4343-a6c7-0fa6e0281b2d\") " Feb 23 13:26:21 crc kubenswrapper[4851]: I0223 13:26:21.970402 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-kube-api-access-9cs4c" (OuterVolumeSpecName: "kube-api-access-9cs4c") pod "b07f3810-fe79-4343-a6c7-0fa6e0281b2d" (UID: "b07f3810-fe79-4343-a6c7-0fa6e0281b2d"). InnerVolumeSpecName "kube-api-access-9cs4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:26:21 crc kubenswrapper[4851]: I0223 13:26:21.979719 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b07f3810-fe79-4343-a6c7-0fa6e0281b2d" (UID: "b07f3810-fe79-4343-a6c7-0fa6e0281b2d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.006550 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b07f3810-fe79-4343-a6c7-0fa6e0281b2d" (UID: "b07f3810-fe79-4343-a6c7-0fa6e0281b2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.021523 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-config-data" (OuterVolumeSpecName: "config-data") pod "b07f3810-fe79-4343-a6c7-0fa6e0281b2d" (UID: "b07f3810-fe79-4343-a6c7-0fa6e0281b2d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.068316 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.068377 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cs4c\" (UniqueName: \"kubernetes.io/projected/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-kube-api-access-9cs4c\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.068393 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.068408 4851 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b07f3810-fe79-4343-a6c7-0fa6e0281b2d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.529802 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"52e86c5ca632219136a74dd1cb9e9f9838d4c9d3022c5cc29f7fcd8ed58d610f"} Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.531567 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"36648ca92023dd50caa4492a8d066298fe8a7dffa642de14f75393f35b09a00b"} Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.531654 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"72aa5a19cf478679021f9012a5114a7c62abc222a78037f6a8f55334e2d18b53"} Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.531723 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"74eb98e6e275433cb7bcfb6dffe1858e39451470692d1fb583a82f31f0d88afe"} Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.531785 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"24c859bdc2680600cf0d51bea28e098904df77bc49003ce2c345189a8f04c5b9"} Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.532953 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6sw4p" event={"ID":"b07f3810-fe79-4343-a6c7-0fa6e0281b2d","Type":"ContainerDied","Data":"3dff08d46478b1aed53989efdc16233cf410e44e51346ee0a0e08b48e970c83d"} Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.533021 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dff08d46478b1aed53989efdc16233cf410e44e51346ee0a0e08b48e970c83d" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.533063 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6sw4p" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.941040 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-lltwd"] Feb 23 13:26:22 crc kubenswrapper[4851]: E0223 13:26:22.947957 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d374eb95-1961-4609-a5a8-3022938ceb54" containerName="ovn-config" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.948097 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="d374eb95-1961-4609-a5a8-3022938ceb54" containerName="ovn-config" Feb 23 13:26:22 crc kubenswrapper[4851]: E0223 13:26:22.948176 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb17c35-0ab0-4089-9847-d2acfdb17332" containerName="mariadb-account-create-update" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.948239 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb17c35-0ab0-4089-9847-d2acfdb17332" containerName="mariadb-account-create-update" Feb 23 13:26:22 crc kubenswrapper[4851]: E0223 13:26:22.948306 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07f3810-fe79-4343-a6c7-0fa6e0281b2d" containerName="glance-db-sync" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.948391 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07f3810-fe79-4343-a6c7-0fa6e0281b2d" containerName="glance-db-sync" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.948716 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="d374eb95-1961-4609-a5a8-3022938ceb54" containerName="ovn-config" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.948804 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b07f3810-fe79-4343-a6c7-0fa6e0281b2d" containerName="glance-db-sync" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.948881 4851 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dbb17c35-0ab0-4089-9847-d2acfdb17332" containerName="mariadb-account-create-update" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.950004 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:22 crc kubenswrapper[4851]: I0223 13:26:22.958289 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-lltwd"] Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.089864 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.089931 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.090127 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.090316 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-config\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " 
pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.090611 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th96t\" (UniqueName: \"kubernetes.io/projected/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-kube-api-access-th96t\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.192180 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th96t\" (UniqueName: \"kubernetes.io/projected/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-kube-api-access-th96t\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.192273 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.192297 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.192348 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " 
pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.192383 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-config\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.193427 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-config\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.193681 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.194088 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.194444 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.234423 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th96t\" (UniqueName: \"kubernetes.io/projected/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-kube-api-access-th96t\") pod \"dnsmasq-dns-5b946c75cc-lltwd\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.265765 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.553606 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e3cd939-1e76-4a55-bb7b-614ae880e79c","Type":"ContainerStarted","Data":"36ca0ac67e948fc402566fbcb9c7b1ca40047963e812f04f0500eecb485140e3"} Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.605526 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.136947867 podStartE2EDuration="40.605504756s" podCreationTimestamp="2026-02-23 13:25:43 +0000 UTC" firstStartedPulling="2026-02-23 13:26:16.829464882 +0000 UTC m=+1131.511168560" lastFinishedPulling="2026-02-23 13:26:21.298021771 +0000 UTC m=+1135.979725449" observedRunningTime="2026-02-23 13:26:23.599027022 +0000 UTC m=+1138.280730710" watchObservedRunningTime="2026-02-23 13:26:23.605504756 +0000 UTC m=+1138.287208434" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.737233 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-lltwd"] Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.900406 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-lltwd"] Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.931304 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rpncf"] Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.932588 4851 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.934545 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 23 13:26:23 crc kubenswrapper[4851]: I0223 13:26:23.961774 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rpncf"] Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.113396 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.113450 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.114128 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-config\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.114276 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dx9n\" (UniqueName: \"kubernetes.io/projected/f5fa0050-b730-4479-8add-4c4212c014d1-kube-api-access-9dx9n\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: 
\"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.114357 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.114538 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.216116 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.216222 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.216250 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: 
\"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.216284 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-config\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.216308 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dx9n\" (UniqueName: \"kubernetes.io/projected/f5fa0050-b730-4479-8add-4c4212c014d1-kube-api-access-9dx9n\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.216348 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.217316 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.217372 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " 
pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.217517 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.217826 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.218306 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-config\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.234044 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dx9n\" (UniqueName: \"kubernetes.io/projected/f5fa0050-b730-4479-8add-4c4212c014d1-kube-api-access-9dx9n\") pod \"dnsmasq-dns-74f6bcbc87-rpncf\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.256271 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.562310 4851 generic.go:334] "Generic (PLEG): container finished" podID="5c5e8318-82ad-4af0-a0a4-16340e7dd56f" containerID="29cd4eb3001382d4da8922aafc27476896b5ae60c8954d470e872f9b15f82fb6" exitCode=0 Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.562367 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" event={"ID":"5c5e8318-82ad-4af0-a0a4-16340e7dd56f","Type":"ContainerDied","Data":"29cd4eb3001382d4da8922aafc27476896b5ae60c8954d470e872f9b15f82fb6"} Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.562630 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" event={"ID":"5c5e8318-82ad-4af0-a0a4-16340e7dd56f","Type":"ContainerStarted","Data":"a0addaff5309bb0123dacf70ebbf4cefb7f6d67d1820a7d03a7471e6ab3f87ab"} Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.743614 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rpncf"] Feb 23 13:26:24 crc kubenswrapper[4851]: I0223 13:26:24.995116 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.130576 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-sb\") pod \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.130663 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-config\") pod \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.130715 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-nb\") pod \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.130747 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-dns-svc\") pod \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.130803 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th96t\" (UniqueName: \"kubernetes.io/projected/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-kube-api-access-th96t\") pod \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\" (UID: \"5c5e8318-82ad-4af0-a0a4-16340e7dd56f\") " Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.136986 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-kube-api-access-th96t" (OuterVolumeSpecName: "kube-api-access-th96t") pod "5c5e8318-82ad-4af0-a0a4-16340e7dd56f" (UID: "5c5e8318-82ad-4af0-a0a4-16340e7dd56f"). InnerVolumeSpecName "kube-api-access-th96t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.150017 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5c5e8318-82ad-4af0-a0a4-16340e7dd56f" (UID: "5c5e8318-82ad-4af0-a0a4-16340e7dd56f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.153232 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c5e8318-82ad-4af0-a0a4-16340e7dd56f" (UID: "5c5e8318-82ad-4af0-a0a4-16340e7dd56f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.154760 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-config" (OuterVolumeSpecName: "config") pod "5c5e8318-82ad-4af0-a0a4-16340e7dd56f" (UID: "5c5e8318-82ad-4af0-a0a4-16340e7dd56f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.156057 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5c5e8318-82ad-4af0-a0a4-16340e7dd56f" (UID: "5c5e8318-82ad-4af0-a0a4-16340e7dd56f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.232621 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.232653 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.232663 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.232672 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.232680 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th96t\" (UniqueName: \"kubernetes.io/projected/5c5e8318-82ad-4af0-a0a4-16340e7dd56f-kube-api-access-th96t\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.571353 4851 generic.go:334] "Generic (PLEG): container finished" podID="f5fa0050-b730-4479-8add-4c4212c014d1" containerID="144864ee25d981e0939e4e111e3d259ce448d9979d8e8071600a1273bf662d32" exitCode=0 Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.571432 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" event={"ID":"f5fa0050-b730-4479-8add-4c4212c014d1","Type":"ContainerDied","Data":"144864ee25d981e0939e4e111e3d259ce448d9979d8e8071600a1273bf662d32"} Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 
13:26:25.571458 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" event={"ID":"f5fa0050-b730-4479-8add-4c4212c014d1","Type":"ContainerStarted","Data":"c98e768cf0a1df4a88f8e11f8f46a81f7dac8fdefc79dbf349fd2511cb86f91a"} Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.573475 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" event={"ID":"5c5e8318-82ad-4af0-a0a4-16340e7dd56f","Type":"ContainerDied","Data":"a0addaff5309bb0123dacf70ebbf4cefb7f6d67d1820a7d03a7471e6ab3f87ab"} Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.573516 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-lltwd" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.573533 4851 scope.go:117] "RemoveContainer" containerID="29cd4eb3001382d4da8922aafc27476896b5ae60c8954d470e872f9b15f82fb6" Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.761878 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-lltwd"] Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.767262 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-lltwd"] Feb 23 13:26:25 crc kubenswrapper[4851]: I0223 13:26:25.993350 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5e8318-82ad-4af0-a0a4-16340e7dd56f" path="/var/lib/kubelet/pods/5c5e8318-82ad-4af0-a0a4-16340e7dd56f/volumes" Feb 23 13:26:26 crc kubenswrapper[4851]: I0223 13:26:26.582986 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" event={"ID":"f5fa0050-b730-4479-8add-4c4212c014d1","Type":"ContainerStarted","Data":"511c696e9e1cf6fe368eb437386798f75a8acc628866f6473d06ea297f5e5646"} Feb 23 13:26:26 crc kubenswrapper[4851]: I0223 13:26:26.584095 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:26:26 crc kubenswrapper[4851]: I0223 13:26:26.602355 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" podStartSLOduration=3.602322135 podStartE2EDuration="3.602322135s" podCreationTimestamp="2026-02-23 13:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:26:26.598998331 +0000 UTC m=+1141.280702019" watchObservedRunningTime="2026-02-23 13:26:26.602322135 +0000 UTC m=+1141.284025813" Feb 23 13:26:27 crc kubenswrapper[4851]: I0223 13:26:27.678956 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 23 13:26:27 crc kubenswrapper[4851]: I0223 13:26:27.985539 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.114536 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-pqvpq"] Feb 23 13:26:28 crc kubenswrapper[4851]: E0223 13:26:28.114857 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5e8318-82ad-4af0-a0a4-16340e7dd56f" containerName="init" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.114872 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5e8318-82ad-4af0-a0a4-16340e7dd56f" containerName="init" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.115023 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5e8318-82ad-4af0-a0a4-16340e7dd56f" containerName="init" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.115535 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pqvpq" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.127195 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pqvpq"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.244582 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-044e-account-create-update-hq2hn"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.245607 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-044e-account-create-update-hq2hn" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.247361 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.266266 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-044e-account-create-update-hq2hn"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.279406 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e353fec8-4196-4baa-8f02-878651e9bcc5-operator-scripts\") pod \"cinder-db-create-pqvpq\" (UID: \"e353fec8-4196-4baa-8f02-878651e9bcc5\") " pod="openstack/cinder-db-create-pqvpq" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.279644 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grtlf\" (UniqueName: \"kubernetes.io/projected/e353fec8-4196-4baa-8f02-878651e9bcc5-kube-api-access-grtlf\") pod \"cinder-db-create-pqvpq\" (UID: \"e353fec8-4196-4baa-8f02-878651e9bcc5\") " pod="openstack/cinder-db-create-pqvpq" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.311898 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-7dwr5"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.312841 4851 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/barbican-db-create-7dwr5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.321956 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0f44-account-create-update-ptrt5"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.323208 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0f44-account-create-update-ptrt5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.325432 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.331482 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7dwr5"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.374796 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0f44-account-create-update-ptrt5"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.389791 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-operator-scripts\") pod \"barbican-044e-account-create-update-hq2hn\" (UID: \"39ef6fd7-8171-4e6b-9cda-5b8610248ca2\") " pod="openstack/barbican-044e-account-create-update-hq2hn" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.390052 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grtlf\" (UniqueName: \"kubernetes.io/projected/e353fec8-4196-4baa-8f02-878651e9bcc5-kube-api-access-grtlf\") pod \"cinder-db-create-pqvpq\" (UID: \"e353fec8-4196-4baa-8f02-878651e9bcc5\") " pod="openstack/cinder-db-create-pqvpq" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.390542 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e353fec8-4196-4baa-8f02-878651e9bcc5-operator-scripts\") pod \"cinder-db-create-pqvpq\" (UID: \"e353fec8-4196-4baa-8f02-878651e9bcc5\") " pod="openstack/cinder-db-create-pqvpq" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.401991 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52qdk\" (UniqueName: \"kubernetes.io/projected/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-kube-api-access-52qdk\") pod \"barbican-044e-account-create-update-hq2hn\" (UID: \"39ef6fd7-8171-4e6b-9cda-5b8610248ca2\") " pod="openstack/barbican-044e-account-create-update-hq2hn" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.401799 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e353fec8-4196-4baa-8f02-878651e9bcc5-operator-scripts\") pod \"cinder-db-create-pqvpq\" (UID: \"e353fec8-4196-4baa-8f02-878651e9bcc5\") " pod="openstack/cinder-db-create-pqvpq" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.417987 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ht4d4"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.419988 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ht4d4" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.423346 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grtlf\" (UniqueName: \"kubernetes.io/projected/e353fec8-4196-4baa-8f02-878651e9bcc5-kube-api-access-grtlf\") pod \"cinder-db-create-pqvpq\" (UID: \"e353fec8-4196-4baa-8f02-878651e9bcc5\") " pod="openstack/cinder-db-create-pqvpq" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.426290 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.426586 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4gbxh" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.426651 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.426730 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-x9z4v"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.427008 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.427705 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x9z4v" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.434501 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-pqvpq" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.437296 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ht4d4"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.443459 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x9z4v"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.503794 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52qdk\" (UniqueName: \"kubernetes.io/projected/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-kube-api-access-52qdk\") pod \"barbican-044e-account-create-update-hq2hn\" (UID: \"39ef6fd7-8171-4e6b-9cda-5b8610248ca2\") " pod="openstack/barbican-044e-account-create-update-hq2hn" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.503877 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-operator-scripts\") pod \"cinder-0f44-account-create-update-ptrt5\" (UID: \"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad\") " pod="openstack/cinder-0f44-account-create-update-ptrt5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.503911 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-operator-scripts\") pod \"barbican-044e-account-create-update-hq2hn\" (UID: \"39ef6fd7-8171-4e6b-9cda-5b8610248ca2\") " pod="openstack/barbican-044e-account-create-update-hq2hn" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.503958 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r97x2\" (UniqueName: \"kubernetes.io/projected/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-kube-api-access-r97x2\") pod 
\"cinder-0f44-account-create-update-ptrt5\" (UID: \"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad\") " pod="openstack/cinder-0f44-account-create-update-ptrt5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.504010 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-operator-scripts\") pod \"barbican-db-create-7dwr5\" (UID: \"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4\") " pod="openstack/barbican-db-create-7dwr5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.504074 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbm85\" (UniqueName: \"kubernetes.io/projected/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-kube-api-access-nbm85\") pod \"barbican-db-create-7dwr5\" (UID: \"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4\") " pod="openstack/barbican-db-create-7dwr5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.505315 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-operator-scripts\") pod \"barbican-044e-account-create-update-hq2hn\" (UID: \"39ef6fd7-8171-4e6b-9cda-5b8610248ca2\") " pod="openstack/barbican-044e-account-create-update-hq2hn" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.530029 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52qdk\" (UniqueName: \"kubernetes.io/projected/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-kube-api-access-52qdk\") pod \"barbican-044e-account-create-update-hq2hn\" (UID: \"39ef6fd7-8171-4e6b-9cda-5b8610248ca2\") " pod="openstack/barbican-044e-account-create-update-hq2hn" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.562037 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-044e-account-create-update-hq2hn" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.622354 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrt5m\" (UniqueName: \"kubernetes.io/projected/3ec82359-d307-4ee4-8dc7-a9db6d393244-kube-api-access-rrt5m\") pod \"neutron-db-create-x9z4v\" (UID: \"3ec82359-d307-4ee4-8dc7-a9db6d393244\") " pod="openstack/neutron-db-create-x9z4v" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.622410 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec82359-d307-4ee4-8dc7-a9db6d393244-operator-scripts\") pod \"neutron-db-create-x9z4v\" (UID: \"3ec82359-d307-4ee4-8dc7-a9db6d393244\") " pod="openstack/neutron-db-create-x9z4v" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.622444 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-operator-scripts\") pod \"barbican-db-create-7dwr5\" (UID: \"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4\") " pod="openstack/barbican-db-create-7dwr5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.622498 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-config-data\") pod \"keystone-db-sync-ht4d4\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") " pod="openstack/keystone-db-sync-ht4d4" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.622553 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbm85\" (UniqueName: \"kubernetes.io/projected/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-kube-api-access-nbm85\") pod \"barbican-db-create-7dwr5\" (UID: 
\"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4\") " pod="openstack/barbican-db-create-7dwr5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.622653 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-combined-ca-bundle\") pod \"keystone-db-sync-ht4d4\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") " pod="openstack/keystone-db-sync-ht4d4" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.622685 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-operator-scripts\") pod \"cinder-0f44-account-create-update-ptrt5\" (UID: \"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad\") " pod="openstack/cinder-0f44-account-create-update-ptrt5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.622715 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nczd5\" (UniqueName: \"kubernetes.io/projected/8776d848-0f18-44d4-9eaf-3108ca8a79bd-kube-api-access-nczd5\") pod \"keystone-db-sync-ht4d4\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") " pod="openstack/keystone-db-sync-ht4d4" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.622780 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r97x2\" (UniqueName: \"kubernetes.io/projected/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-kube-api-access-r97x2\") pod \"cinder-0f44-account-create-update-ptrt5\" (UID: \"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad\") " pod="openstack/cinder-0f44-account-create-update-ptrt5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.624263 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-operator-scripts\") pod 
\"cinder-0f44-account-create-update-ptrt5\" (UID: \"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad\") " pod="openstack/cinder-0f44-account-create-update-ptrt5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.624749 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-operator-scripts\") pod \"barbican-db-create-7dwr5\" (UID: \"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4\") " pod="openstack/barbican-db-create-7dwr5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.635175 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7231-account-create-update-whd9p"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.636573 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7231-account-create-update-whd9p" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.648555 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.654440 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbm85\" (UniqueName: \"kubernetes.io/projected/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-kube-api-access-nbm85\") pod \"barbican-db-create-7dwr5\" (UID: \"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4\") " pod="openstack/barbican-db-create-7dwr5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.658084 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7231-account-create-update-whd9p"] Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.662250 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r97x2\" (UniqueName: \"kubernetes.io/projected/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-kube-api-access-r97x2\") pod \"cinder-0f44-account-create-update-ptrt5\" (UID: \"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad\") " 
pod="openstack/cinder-0f44-account-create-update-ptrt5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.724077 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrt5m\" (UniqueName: \"kubernetes.io/projected/3ec82359-d307-4ee4-8dc7-a9db6d393244-kube-api-access-rrt5m\") pod \"neutron-db-create-x9z4v\" (UID: \"3ec82359-d307-4ee4-8dc7-a9db6d393244\") " pod="openstack/neutron-db-create-x9z4v" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.724860 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec82359-d307-4ee4-8dc7-a9db6d393244-operator-scripts\") pod \"neutron-db-create-x9z4v\" (UID: \"3ec82359-d307-4ee4-8dc7-a9db6d393244\") " pod="openstack/neutron-db-create-x9z4v" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.724924 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-config-data\") pod \"keystone-db-sync-ht4d4\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") " pod="openstack/keystone-db-sync-ht4d4" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.724994 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88533393-d631-48b0-b09f-883391965b09-operator-scripts\") pod \"neutron-7231-account-create-update-whd9p\" (UID: \"88533393-d631-48b0-b09f-883391965b09\") " pod="openstack/neutron-7231-account-create-update-whd9p" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.725031 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z4kp\" (UniqueName: \"kubernetes.io/projected/88533393-d631-48b0-b09f-883391965b09-kube-api-access-2z4kp\") pod \"neutron-7231-account-create-update-whd9p\" (UID: 
\"88533393-d631-48b0-b09f-883391965b09\") " pod="openstack/neutron-7231-account-create-update-whd9p" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.725093 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-combined-ca-bundle\") pod \"keystone-db-sync-ht4d4\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") " pod="openstack/keystone-db-sync-ht4d4" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.725129 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nczd5\" (UniqueName: \"kubernetes.io/projected/8776d848-0f18-44d4-9eaf-3108ca8a79bd-kube-api-access-nczd5\") pod \"keystone-db-sync-ht4d4\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") " pod="openstack/keystone-db-sync-ht4d4" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.726620 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec82359-d307-4ee4-8dc7-a9db6d393244-operator-scripts\") pod \"neutron-db-create-x9z4v\" (UID: \"3ec82359-d307-4ee4-8dc7-a9db6d393244\") " pod="openstack/neutron-db-create-x9z4v" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.733742 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-config-data\") pod \"keystone-db-sync-ht4d4\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") " pod="openstack/keystone-db-sync-ht4d4" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.742430 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-combined-ca-bundle\") pod \"keystone-db-sync-ht4d4\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") " pod="openstack/keystone-db-sync-ht4d4" Feb 23 13:26:28 
crc kubenswrapper[4851]: I0223 13:26:28.745535 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrt5m\" (UniqueName: \"kubernetes.io/projected/3ec82359-d307-4ee4-8dc7-a9db6d393244-kube-api-access-rrt5m\") pod \"neutron-db-create-x9z4v\" (UID: \"3ec82359-d307-4ee4-8dc7-a9db6d393244\") " pod="openstack/neutron-db-create-x9z4v" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.752802 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nczd5\" (UniqueName: \"kubernetes.io/projected/8776d848-0f18-44d4-9eaf-3108ca8a79bd-kube-api-access-nczd5\") pod \"keystone-db-sync-ht4d4\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") " pod="openstack/keystone-db-sync-ht4d4" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.826950 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88533393-d631-48b0-b09f-883391965b09-operator-scripts\") pod \"neutron-7231-account-create-update-whd9p\" (UID: \"88533393-d631-48b0-b09f-883391965b09\") " pod="openstack/neutron-7231-account-create-update-whd9p" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.827007 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z4kp\" (UniqueName: \"kubernetes.io/projected/88533393-d631-48b0-b09f-883391965b09-kube-api-access-2z4kp\") pod \"neutron-7231-account-create-update-whd9p\" (UID: \"88533393-d631-48b0-b09f-883391965b09\") " pod="openstack/neutron-7231-account-create-update-whd9p" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.827687 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88533393-d631-48b0-b09f-883391965b09-operator-scripts\") pod \"neutron-7231-account-create-update-whd9p\" (UID: \"88533393-d631-48b0-b09f-883391965b09\") " 
pod="openstack/neutron-7231-account-create-update-whd9p" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.842486 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z4kp\" (UniqueName: \"kubernetes.io/projected/88533393-d631-48b0-b09f-883391965b09-kube-api-access-2z4kp\") pod \"neutron-7231-account-create-update-whd9p\" (UID: \"88533393-d631-48b0-b09f-883391965b09\") " pod="openstack/neutron-7231-account-create-update-whd9p" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.896083 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ht4d4" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.934844 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7dwr5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.955679 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0f44-account-create-update-ptrt5" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.958752 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x9z4v" Feb 23 13:26:28 crc kubenswrapper[4851]: I0223 13:26:28.958789 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7231-account-create-update-whd9p" Feb 23 13:26:29 crc kubenswrapper[4851]: I0223 13:26:29.030471 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-044e-account-create-update-hq2hn"] Feb 23 13:26:29 crc kubenswrapper[4851]: I0223 13:26:29.052170 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-pqvpq"] Feb 23 13:26:29 crc kubenswrapper[4851]: I0223 13:26:29.635143 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pqvpq" event={"ID":"e353fec8-4196-4baa-8f02-878651e9bcc5","Type":"ContainerStarted","Data":"3fb839e554007a4882d0f8a62561fb488e392ba5f38d75e054a217a49deb7f6b"} Feb 23 13:26:29 crc kubenswrapper[4851]: I0223 13:26:29.647058 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-044e-account-create-update-hq2hn" event={"ID":"39ef6fd7-8171-4e6b-9cda-5b8610248ca2","Type":"ContainerStarted","Data":"636b16190409bedf3834cceb98a56c38d440911b575307a4475a7079f6d94445"} Feb 23 13:26:29 crc kubenswrapper[4851]: I0223 13:26:29.655349 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-x9z4v"] Feb 23 13:26:29 crc kubenswrapper[4851]: I0223 13:26:29.741403 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ht4d4"] Feb 23 13:26:29 crc kubenswrapper[4851]: I0223 13:26:29.755777 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7dwr5"] Feb 23 13:26:29 crc kubenswrapper[4851]: I0223 13:26:29.816497 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0f44-account-create-update-ptrt5"] Feb 23 13:26:29 crc kubenswrapper[4851]: I0223 13:26:29.826419 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7231-account-create-update-whd9p"] Feb 23 13:26:30 crc kubenswrapper[4851]: I0223 13:26:30.653881 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-db-create-x9z4v" event={"ID":"3ec82359-d307-4ee4-8dc7-a9db6d393244","Type":"ContainerStarted","Data":"2728bb6ba55885921f982bb80b8618e17d540eb4aa59c0136853f38003125b0a"} Feb 23 13:26:32 crc kubenswrapper[4851]: W0223 13:26:32.134168 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86b5ffac_c8d0_4b5c_9f28_daba9dfb6be4.slice/crio-8fb94f65ef882430781fd6f8f1e654a6e1e5af97b451804ee287d866200c1a6f WatchSource:0}: Error finding container 8fb94f65ef882430781fd6f8f1e654a6e1e5af97b451804ee287d866200c1a6f: Status 404 returned error can't find the container with id 8fb94f65ef882430781fd6f8f1e654a6e1e5af97b451804ee287d866200c1a6f Feb 23 13:26:32 crc kubenswrapper[4851]: W0223 13:26:32.139827 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88533393_d631_48b0_b09f_883391965b09.slice/crio-22a44c30dd6997b496421bf1569cd53f4a31de101e635063628b228b05c368f9 WatchSource:0}: Error finding container 22a44c30dd6997b496421bf1569cd53f4a31de101e635063628b228b05c368f9: Status 404 returned error can't find the container with id 22a44c30dd6997b496421bf1569cd53f4a31de101e635063628b228b05c368f9 Feb 23 13:26:32 crc kubenswrapper[4851]: W0223 13:26:32.143103 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ae4a32c_745e_4a9e_a3ca_226b8890d6ad.slice/crio-d1472f0faf691b153594564bfed17a8ffa45a7cb94c5c385e413a997a8567c3a WatchSource:0}: Error finding container d1472f0faf691b153594564bfed17a8ffa45a7cb94c5c385e413a997a8567c3a: Status 404 returned error can't find the container with id d1472f0faf691b153594564bfed17a8ffa45a7cb94c5c385e413a997a8567c3a Feb 23 13:26:32 crc kubenswrapper[4851]: I0223 13:26:32.669795 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0f44-account-create-update-ptrt5" 
event={"ID":"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad","Type":"ContainerStarted","Data":"d1472f0faf691b153594564bfed17a8ffa45a7cb94c5c385e413a997a8567c3a"} Feb 23 13:26:32 crc kubenswrapper[4851]: I0223 13:26:32.670904 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ht4d4" event={"ID":"8776d848-0f18-44d4-9eaf-3108ca8a79bd","Type":"ContainerStarted","Data":"01da2bfdb9137c06ca5800e38c0eb7e3ab68508d4a8f7e93f0268bed9a72608b"} Feb 23 13:26:32 crc kubenswrapper[4851]: I0223 13:26:32.671878 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7dwr5" event={"ID":"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4","Type":"ContainerStarted","Data":"8fb94f65ef882430781fd6f8f1e654a6e1e5af97b451804ee287d866200c1a6f"} Feb 23 13:26:32 crc kubenswrapper[4851]: I0223 13:26:32.672821 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7231-account-create-update-whd9p" event={"ID":"88533393-d631-48b0-b09f-883391965b09","Type":"ContainerStarted","Data":"22a44c30dd6997b496421bf1569cd53f4a31de101e635063628b228b05c368f9"} Feb 23 13:26:33 crc kubenswrapper[4851]: I0223 13:26:33.683444 4851 generic.go:334] "Generic (PLEG): container finished" podID="39ef6fd7-8171-4e6b-9cda-5b8610248ca2" containerID="5718f6e2fef8f88ecee6191f90308c1e6f1cb1486ad95ab8816915b6dda426e3" exitCode=0 Feb 23 13:26:33 crc kubenswrapper[4851]: I0223 13:26:33.683518 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-044e-account-create-update-hq2hn" event={"ID":"39ef6fd7-8171-4e6b-9cda-5b8610248ca2","Type":"ContainerDied","Data":"5718f6e2fef8f88ecee6191f90308c1e6f1cb1486ad95ab8816915b6dda426e3"} Feb 23 13:26:33 crc kubenswrapper[4851]: I0223 13:26:33.685746 4851 generic.go:334] "Generic (PLEG): container finished" podID="86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4" containerID="5882c447bf0593a3ea784c253c941de4ecbf92fa9266af3f98a34603d92b1587" exitCode=0 Feb 23 13:26:33 crc kubenswrapper[4851]: I0223 13:26:33.685818 
4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7dwr5" event={"ID":"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4","Type":"ContainerDied","Data":"5882c447bf0593a3ea784c253c941de4ecbf92fa9266af3f98a34603d92b1587"} Feb 23 13:26:33 crc kubenswrapper[4851]: I0223 13:26:33.687597 4851 generic.go:334] "Generic (PLEG): container finished" podID="88533393-d631-48b0-b09f-883391965b09" containerID="bce5cc185b8b88d0a15a53cc561ccf2680545c1dfe57fde981752850e951b964" exitCode=0 Feb 23 13:26:33 crc kubenswrapper[4851]: I0223 13:26:33.687642 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7231-account-create-update-whd9p" event={"ID":"88533393-d631-48b0-b09f-883391965b09","Type":"ContainerDied","Data":"bce5cc185b8b88d0a15a53cc561ccf2680545c1dfe57fde981752850e951b964"} Feb 23 13:26:33 crc kubenswrapper[4851]: I0223 13:26:33.689145 4851 generic.go:334] "Generic (PLEG): container finished" podID="e353fec8-4196-4baa-8f02-878651e9bcc5" containerID="a1044a55accbb003d6b23f2e7fa3946ae583b1836f9deb04a07b56bc5e165cd9" exitCode=0 Feb 23 13:26:33 crc kubenswrapper[4851]: I0223 13:26:33.689185 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pqvpq" event={"ID":"e353fec8-4196-4baa-8f02-878651e9bcc5","Type":"ContainerDied","Data":"a1044a55accbb003d6b23f2e7fa3946ae583b1836f9deb04a07b56bc5e165cd9"} Feb 23 13:26:33 crc kubenswrapper[4851]: I0223 13:26:33.690280 4851 generic.go:334] "Generic (PLEG): container finished" podID="0ae4a32c-745e-4a9e-a3ca-226b8890d6ad" containerID="88db0e3e152e2e59564c5dc81e3ba47be96cb8309af6bb7315e5d46e1cb7cdb8" exitCode=0 Feb 23 13:26:33 crc kubenswrapper[4851]: I0223 13:26:33.690320 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0f44-account-create-update-ptrt5" event={"ID":"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad","Type":"ContainerDied","Data":"88db0e3e152e2e59564c5dc81e3ba47be96cb8309af6bb7315e5d46e1cb7cdb8"} Feb 23 13:26:33 crc 
kubenswrapper[4851]: I0223 13:26:33.694693 4851 generic.go:334] "Generic (PLEG): container finished" podID="3ec82359-d307-4ee4-8dc7-a9db6d393244" containerID="9e6bba80d0fbbaf71bff771b064c0e300a0d85f8f12f445cee50ec5624277b3b" exitCode=0
Feb 23 13:26:33 crc kubenswrapper[4851]: I0223 13:26:33.694759 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x9z4v" event={"ID":"3ec82359-d307-4ee4-8dc7-a9db6d393244","Type":"ContainerDied","Data":"9e6bba80d0fbbaf71bff771b064c0e300a0d85f8f12f445cee50ec5624277b3b"}
Feb 23 13:26:34 crc kubenswrapper[4851]: I0223 13:26:34.257511 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf"
Feb 23 13:26:34 crc kubenswrapper[4851]: I0223 13:26:34.308719 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-29d8z"]
Feb 23 13:26:34 crc kubenswrapper[4851]: I0223 13:26:34.308944 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-29d8z" podUID="1c03be1e-abc1-4289-b6dc-ba4b3ac70614" containerName="dnsmasq-dns" containerID="cri-o://5032e438bfae7ac60345d880dc3b07826033c3e1d71731017da340373b6a35df" gracePeriod=10
Feb 23 13:26:34 crc kubenswrapper[4851]: I0223 13:26:34.711883 4851 generic.go:334] "Generic (PLEG): container finished" podID="1c03be1e-abc1-4289-b6dc-ba4b3ac70614" containerID="5032e438bfae7ac60345d880dc3b07826033c3e1d71731017da340373b6a35df" exitCode=0
Feb 23 13:26:34 crc kubenswrapper[4851]: I0223 13:26:34.711960 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-29d8z" event={"ID":"1c03be1e-abc1-4289-b6dc-ba4b3ac70614","Type":"ContainerDied","Data":"5032e438bfae7ac60345d880dc3b07826033c3e1d71731017da340373b6a35df"}
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.296689 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pqvpq"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.355413 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7dwr5"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.366445 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7231-account-create-update-whd9p"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.397091 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grtlf\" (UniqueName: \"kubernetes.io/projected/e353fec8-4196-4baa-8f02-878651e9bcc5-kube-api-access-grtlf\") pod \"e353fec8-4196-4baa-8f02-878651e9bcc5\" (UID: \"e353fec8-4196-4baa-8f02-878651e9bcc5\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.397210 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbm85\" (UniqueName: \"kubernetes.io/projected/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-kube-api-access-nbm85\") pod \"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4\" (UID: \"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.397242 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-operator-scripts\") pod \"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4\" (UID: \"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.397269 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e353fec8-4196-4baa-8f02-878651e9bcc5-operator-scripts\") pod \"e353fec8-4196-4baa-8f02-878651e9bcc5\" (UID: \"e353fec8-4196-4baa-8f02-878651e9bcc5\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.397307 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z4kp\" (UniqueName: \"kubernetes.io/projected/88533393-d631-48b0-b09f-883391965b09-kube-api-access-2z4kp\") pod \"88533393-d631-48b0-b09f-883391965b09\" (UID: \"88533393-d631-48b0-b09f-883391965b09\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.397385 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88533393-d631-48b0-b09f-883391965b09-operator-scripts\") pod \"88533393-d631-48b0-b09f-883391965b09\" (UID: \"88533393-d631-48b0-b09f-883391965b09\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.397977 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88533393-d631-48b0-b09f-883391965b09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88533393-d631-48b0-b09f-883391965b09" (UID: "88533393-d631-48b0-b09f-883391965b09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.398013 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e353fec8-4196-4baa-8f02-878651e9bcc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e353fec8-4196-4baa-8f02-878651e9bcc5" (UID: "e353fec8-4196-4baa-8f02-878651e9bcc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.398129 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4" (UID: "86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.400112 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-044e-account-create-update-hq2hn"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.401542 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e353fec8-4196-4baa-8f02-878651e9bcc5-kube-api-access-grtlf" (OuterVolumeSpecName: "kube-api-access-grtlf") pod "e353fec8-4196-4baa-8f02-878651e9bcc5" (UID: "e353fec8-4196-4baa-8f02-878651e9bcc5"). InnerVolumeSpecName "kube-api-access-grtlf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.403066 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88533393-d631-48b0-b09f-883391965b09-kube-api-access-2z4kp" (OuterVolumeSpecName: "kube-api-access-2z4kp") pod "88533393-d631-48b0-b09f-883391965b09" (UID: "88533393-d631-48b0-b09f-883391965b09"). InnerVolumeSpecName "kube-api-access-2z4kp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.403810 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-kube-api-access-nbm85" (OuterVolumeSpecName: "kube-api-access-nbm85") pod "86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4" (UID: "86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4"). InnerVolumeSpecName "kube-api-access-nbm85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.414267 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x9z4v"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.427001 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0f44-account-create-update-ptrt5"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.435415 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-29d8z"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.501242 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-operator-scripts\") pod \"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad\" (UID: \"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.501282 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec82359-d307-4ee4-8dc7-a9db6d393244-operator-scripts\") pod \"3ec82359-d307-4ee4-8dc7-a9db6d393244\" (UID: \"3ec82359-d307-4ee4-8dc7-a9db6d393244\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.501338 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52qdk\" (UniqueName: \"kubernetes.io/projected/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-kube-api-access-52qdk\") pod \"39ef6fd7-8171-4e6b-9cda-5b8610248ca2\" (UID: \"39ef6fd7-8171-4e6b-9cda-5b8610248ca2\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.501387 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-operator-scripts\") pod \"39ef6fd7-8171-4e6b-9cda-5b8610248ca2\" (UID: \"39ef6fd7-8171-4e6b-9cda-5b8610248ca2\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.501437 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-sb\") pod \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.501456 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-config\") pod \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.501480 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrt5m\" (UniqueName: \"kubernetes.io/projected/3ec82359-d307-4ee4-8dc7-a9db6d393244-kube-api-access-rrt5m\") pod \"3ec82359-d307-4ee4-8dc7-a9db6d393244\" (UID: \"3ec82359-d307-4ee4-8dc7-a9db6d393244\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.501532 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-dns-svc\") pod \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.501548 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r97x2\" (UniqueName: \"kubernetes.io/projected/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-kube-api-access-r97x2\") pod \"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad\" (UID: \"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.501574 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4s8f\" (UniqueName: \"kubernetes.io/projected/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-kube-api-access-f4s8f\") pod \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.501925 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39ef6fd7-8171-4e6b-9cda-5b8610248ca2" (UID: "39ef6fd7-8171-4e6b-9cda-5b8610248ca2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.502056 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec82359-d307-4ee4-8dc7-a9db6d393244-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ec82359-d307-4ee4-8dc7-a9db6d393244" (UID: "3ec82359-d307-4ee4-8dc7-a9db6d393244"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.502165 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-nb\") pod \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\" (UID: \"1c03be1e-abc1-4289-b6dc-ba4b3ac70614\") "
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.502238 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ae4a32c-745e-4a9e-a3ca-226b8890d6ad" (UID: "0ae4a32c-745e-4a9e-a3ca-226b8890d6ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.503745 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbm85\" (UniqueName: \"kubernetes.io/projected/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-kube-api-access-nbm85\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.503765 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.503776 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.503784 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec82359-d307-4ee4-8dc7-a9db6d393244-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.503792 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e353fec8-4196-4baa-8f02-878651e9bcc5-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.503799 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z4kp\" (UniqueName: \"kubernetes.io/projected/88533393-d631-48b0-b09f-883391965b09-kube-api-access-2z4kp\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.503808 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.503816 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88533393-d631-48b0-b09f-883391965b09-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.503823 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grtlf\" (UniqueName: \"kubernetes.io/projected/e353fec8-4196-4baa-8f02-878651e9bcc5-kube-api-access-grtlf\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.507065 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-kube-api-access-f4s8f" (OuterVolumeSpecName: "kube-api-access-f4s8f") pod "1c03be1e-abc1-4289-b6dc-ba4b3ac70614" (UID: "1c03be1e-abc1-4289-b6dc-ba4b3ac70614"). InnerVolumeSpecName "kube-api-access-f4s8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.507911 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec82359-d307-4ee4-8dc7-a9db6d393244-kube-api-access-rrt5m" (OuterVolumeSpecName: "kube-api-access-rrt5m") pod "3ec82359-d307-4ee4-8dc7-a9db6d393244" (UID: "3ec82359-d307-4ee4-8dc7-a9db6d393244"). InnerVolumeSpecName "kube-api-access-rrt5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.508917 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-kube-api-access-r97x2" (OuterVolumeSpecName: "kube-api-access-r97x2") pod "0ae4a32c-745e-4a9e-a3ca-226b8890d6ad" (UID: "0ae4a32c-745e-4a9e-a3ca-226b8890d6ad"). InnerVolumeSpecName "kube-api-access-r97x2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.522713 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-kube-api-access-52qdk" (OuterVolumeSpecName: "kube-api-access-52qdk") pod "39ef6fd7-8171-4e6b-9cda-5b8610248ca2" (UID: "39ef6fd7-8171-4e6b-9cda-5b8610248ca2"). InnerVolumeSpecName "kube-api-access-52qdk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.544883 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c03be1e-abc1-4289-b6dc-ba4b3ac70614" (UID: "1c03be1e-abc1-4289-b6dc-ba4b3ac70614"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.551419 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c03be1e-abc1-4289-b6dc-ba4b3ac70614" (UID: "1c03be1e-abc1-4289-b6dc-ba4b3ac70614"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.558682 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-config" (OuterVolumeSpecName: "config") pod "1c03be1e-abc1-4289-b6dc-ba4b3ac70614" (UID: "1c03be1e-abc1-4289-b6dc-ba4b3ac70614"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.564835 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c03be1e-abc1-4289-b6dc-ba4b3ac70614" (UID: "1c03be1e-abc1-4289-b6dc-ba4b3ac70614"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.605835 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.606115 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52qdk\" (UniqueName: \"kubernetes.io/projected/39ef6fd7-8171-4e6b-9cda-5b8610248ca2-kube-api-access-52qdk\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.606201 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.606268 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-config\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.606361 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrt5m\" (UniqueName: \"kubernetes.io/projected/3ec82359-d307-4ee4-8dc7-a9db6d393244-kube-api-access-rrt5m\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.606474 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.606543 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r97x2\" (UniqueName: \"kubernetes.io/projected/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad-kube-api-access-r97x2\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.606603 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4s8f\" (UniqueName: \"kubernetes.io/projected/1c03be1e-abc1-4289-b6dc-ba4b3ac70614-kube-api-access-f4s8f\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.740430 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7231-account-create-update-whd9p" event={"ID":"88533393-d631-48b0-b09f-883391965b09","Type":"ContainerDied","Data":"22a44c30dd6997b496421bf1569cd53f4a31de101e635063628b228b05c368f9"}
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.740467 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22a44c30dd6997b496421bf1569cd53f4a31de101e635063628b228b05c368f9"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.740528 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7231-account-create-update-whd9p"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.752266 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-pqvpq" event={"ID":"e353fec8-4196-4baa-8f02-878651e9bcc5","Type":"ContainerDied","Data":"3fb839e554007a4882d0f8a62561fb488e392ba5f38d75e054a217a49deb7f6b"}
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.752308 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fb839e554007a4882d0f8a62561fb488e392ba5f38d75e054a217a49deb7f6b"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.752418 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-pqvpq"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.755587 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-044e-account-create-update-hq2hn" event={"ID":"39ef6fd7-8171-4e6b-9cda-5b8610248ca2","Type":"ContainerDied","Data":"636b16190409bedf3834cceb98a56c38d440911b575307a4475a7079f6d94445"}
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.755625 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="636b16190409bedf3834cceb98a56c38d440911b575307a4475a7079f6d94445"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.755756 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-044e-account-create-update-hq2hn"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.757903 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0f44-account-create-update-ptrt5" event={"ID":"0ae4a32c-745e-4a9e-a3ca-226b8890d6ad","Type":"ContainerDied","Data":"d1472f0faf691b153594564bfed17a8ffa45a7cb94c5c385e413a997a8567c3a"}
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.757942 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1472f0faf691b153594564bfed17a8ffa45a7cb94c5c385e413a997a8567c3a"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.757992 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0f44-account-create-update-ptrt5"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.767746 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-29d8z" event={"ID":"1c03be1e-abc1-4289-b6dc-ba4b3ac70614","Type":"ContainerDied","Data":"9fab3f06c2a503c612e232a4d1c511dafecd5e27da67591dcc6bc42f16ced5aa"}
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.767804 4851 scope.go:117] "RemoveContainer" containerID="5032e438bfae7ac60345d880dc3b07826033c3e1d71731017da340373b6a35df"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.767926 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-29d8z"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.774122 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ht4d4" event={"ID":"8776d848-0f18-44d4-9eaf-3108ca8a79bd","Type":"ContainerStarted","Data":"34850fa7ee6e512ffa8a54c6231e9350a12f99890715d0787beafd198f85f558"}
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.776191 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7dwr5"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.777001 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7dwr5" event={"ID":"86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4","Type":"ContainerDied","Data":"8fb94f65ef882430781fd6f8f1e654a6e1e5af97b451804ee287d866200c1a6f"}
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.777029 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fb94f65ef882430781fd6f8f1e654a6e1e5af97b451804ee287d866200c1a6f"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.788778 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-x9z4v" event={"ID":"3ec82359-d307-4ee4-8dc7-a9db6d393244","Type":"ContainerDied","Data":"2728bb6ba55885921f982bb80b8618e17d540eb4aa59c0136853f38003125b0a"}
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.788821 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2728bb6ba55885921f982bb80b8618e17d540eb4aa59c0136853f38003125b0a"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.788988 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-x9z4v"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.806952 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ht4d4" podStartSLOduration=4.855305574 podStartE2EDuration="9.806934081s" podCreationTimestamp="2026-02-23 13:26:28 +0000 UTC" firstStartedPulling="2026-02-23 13:26:32.134054898 +0000 UTC m=+1146.815758576" lastFinishedPulling="2026-02-23 13:26:37.085683405 +0000 UTC m=+1151.767387083" observedRunningTime="2026-02-23 13:26:37.801094856 +0000 UTC m=+1152.482798544" watchObservedRunningTime="2026-02-23 13:26:37.806934081 +0000 UTC m=+1152.488637759"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.809961 4851 scope.go:117] "RemoveContainer" containerID="55773d1c15f561544ec1ce0e581cf7ab8ac6bdab69dcff565d5712486a9947e2"
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.831889 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-29d8z"]
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.838088 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-29d8z"]
Feb 23 13:26:37 crc kubenswrapper[4851]: I0223 13:26:37.978709 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c03be1e-abc1-4289-b6dc-ba4b3ac70614" path="/var/lib/kubelet/pods/1c03be1e-abc1-4289-b6dc-ba4b3ac70614/volumes"
Feb 23 13:26:40 crc kubenswrapper[4851]: I0223 13:26:40.815152 4851 generic.go:334] "Generic (PLEG): container finished" podID="8776d848-0f18-44d4-9eaf-3108ca8a79bd" containerID="34850fa7ee6e512ffa8a54c6231e9350a12f99890715d0787beafd198f85f558" exitCode=0
Feb 23 13:26:40 crc kubenswrapper[4851]: I0223 13:26:40.815204 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ht4d4" event={"ID":"8776d848-0f18-44d4-9eaf-3108ca8a79bd","Type":"ContainerDied","Data":"34850fa7ee6e512ffa8a54c6231e9350a12f99890715d0787beafd198f85f558"}
Feb 23 13:26:41 crc kubenswrapper[4851]: I0223 13:26:41.926772 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 13:26:41 crc kubenswrapper[4851]: I0223 13:26:41.926834 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 13:26:41 crc kubenswrapper[4851]: I0223 13:26:41.926890 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg"
Feb 23 13:26:41 crc kubenswrapper[4851]: I0223 13:26:41.927631 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60927ae79050568035bcfc1c3f4be4f3b0b6745f639bddbc9d3c155365093c4b"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 13:26:41 crc kubenswrapper[4851]: I0223 13:26:41.927695 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://60927ae79050568035bcfc1c3f4be4f3b0b6745f639bddbc9d3c155365093c4b" gracePeriod=600
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.178839 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ht4d4"
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.273832 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nczd5\" (UniqueName: \"kubernetes.io/projected/8776d848-0f18-44d4-9eaf-3108ca8a79bd-kube-api-access-nczd5\") pod \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") "
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.273898 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-config-data\") pod \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") "
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.274046 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-combined-ca-bundle\") pod \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\" (UID: \"8776d848-0f18-44d4-9eaf-3108ca8a79bd\") "
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.291753 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8776d848-0f18-44d4-9eaf-3108ca8a79bd-kube-api-access-nczd5" (OuterVolumeSpecName: "kube-api-access-nczd5") pod "8776d848-0f18-44d4-9eaf-3108ca8a79bd" (UID: "8776d848-0f18-44d4-9eaf-3108ca8a79bd"). InnerVolumeSpecName "kube-api-access-nczd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.305826 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8776d848-0f18-44d4-9eaf-3108ca8a79bd" (UID: "8776d848-0f18-44d4-9eaf-3108ca8a79bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.314823 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-config-data" (OuterVolumeSpecName: "config-data") pod "8776d848-0f18-44d4-9eaf-3108ca8a79bd" (UID: "8776d848-0f18-44d4-9eaf-3108ca8a79bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.375990 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.376035 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nczd5\" (UniqueName: \"kubernetes.io/projected/8776d848-0f18-44d4-9eaf-3108ca8a79bd-kube-api-access-nczd5\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.376048 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8776d848-0f18-44d4-9eaf-3108ca8a79bd-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.834106 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="60927ae79050568035bcfc1c3f4be4f3b0b6745f639bddbc9d3c155365093c4b" exitCode=0
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.834193 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"60927ae79050568035bcfc1c3f4be4f3b0b6745f639bddbc9d3c155365093c4b"}
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.834221 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"58ac070e07fd5f5e92265b5996711448defe16f94a724c465cc2214cdff34234"}
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.834304 4851 scope.go:117] "RemoveContainer" containerID="1b56c77e1de63323e9342dc73ec952ed4e450a54675ac5d33629ae895364039c"
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.835820 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ht4d4" event={"ID":"8776d848-0f18-44d4-9eaf-3108ca8a79bd","Type":"ContainerDied","Data":"01da2bfdb9137c06ca5800e38c0eb7e3ab68508d4a8f7e93f0268bed9a72608b"}
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.835842 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01da2bfdb9137c06ca5800e38c0eb7e3ab68508d4a8f7e93f0268bed9a72608b"
Feb 23 13:26:42 crc kubenswrapper[4851]: I0223 13:26:42.835895 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ht4d4"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.135342 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jdl7s"]
Feb 23 13:26:43 crc kubenswrapper[4851]: E0223 13:26:43.135965 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8776d848-0f18-44d4-9eaf-3108ca8a79bd" containerName="keystone-db-sync"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.135981 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8776d848-0f18-44d4-9eaf-3108ca8a79bd" containerName="keystone-db-sync"
Feb 23 13:26:43 crc kubenswrapper[4851]: E0223 13:26:43.135997 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c03be1e-abc1-4289-b6dc-ba4b3ac70614" containerName="dnsmasq-dns"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136003 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c03be1e-abc1-4289-b6dc-ba4b3ac70614" containerName="dnsmasq-dns"
Feb 23 13:26:43 crc kubenswrapper[4851]: E0223 13:26:43.136037 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88533393-d631-48b0-b09f-883391965b09" containerName="mariadb-account-create-update"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136045 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="88533393-d631-48b0-b09f-883391965b09" containerName="mariadb-account-create-update"
Feb 23 13:26:43 crc kubenswrapper[4851]: E0223 13:26:43.136059 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ef6fd7-8171-4e6b-9cda-5b8610248ca2" containerName="mariadb-account-create-update"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136067 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ef6fd7-8171-4e6b-9cda-5b8610248ca2" containerName="mariadb-account-create-update"
Feb 23 13:26:43 crc kubenswrapper[4851]: E0223 13:26:43.136085 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c03be1e-abc1-4289-b6dc-ba4b3ac70614" containerName="init"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136092 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c03be1e-abc1-4289-b6dc-ba4b3ac70614" containerName="init"
Feb 23 13:26:43 crc kubenswrapper[4851]: E0223 13:26:43.136101 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae4a32c-745e-4a9e-a3ca-226b8890d6ad" containerName="mariadb-account-create-update"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136107 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae4a32c-745e-4a9e-a3ca-226b8890d6ad" containerName="mariadb-account-create-update"
Feb 23 13:26:43 crc kubenswrapper[4851]: E0223 13:26:43.136134 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e353fec8-4196-4baa-8f02-878651e9bcc5" containerName="mariadb-database-create"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136141 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="e353fec8-4196-4baa-8f02-878651e9bcc5" containerName="mariadb-database-create"
Feb 23 13:26:43 crc kubenswrapper[4851]: E0223 13:26:43.136158 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4" containerName="mariadb-database-create"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136167 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4" containerName="mariadb-database-create"
Feb 23 13:26:43 crc kubenswrapper[4851]: E0223 13:26:43.136179 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec82359-d307-4ee4-8dc7-a9db6d393244" containerName="mariadb-database-create"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136185 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec82359-d307-4ee4-8dc7-a9db6d393244" containerName="mariadb-database-create"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136356 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ef6fd7-8171-4e6b-9cda-5b8610248ca2" containerName="mariadb-account-create-update"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136374 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="e353fec8-4196-4baa-8f02-878651e9bcc5" containerName="mariadb-database-create"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136382 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4" containerName="mariadb-database-create"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136390 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec82359-d307-4ee4-8dc7-a9db6d393244" containerName="mariadb-database-create"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136401 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8776d848-0f18-44d4-9eaf-3108ca8a79bd" containerName="keystone-db-sync"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136410 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae4a32c-745e-4a9e-a3ca-226b8890d6ad" containerName="mariadb-account-create-update"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136418 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="88533393-d631-48b0-b09f-883391965b09" containerName="mariadb-account-create-update"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136424 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c03be1e-abc1-4289-b6dc-ba4b3ac70614" containerName="dnsmasq-dns"
Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.136898 4851 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.142220 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.142346 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.142466 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.142970 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4gbxh" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.145561 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.149793 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jdl7s"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.163652 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-fr5hd"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.165867 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.231029 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g4jb\" (UniqueName: \"kubernetes.io/projected/412b1fad-6874-4067-b8a4-4209d45cf67e-kube-api-access-5g4jb\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.238739 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r68w7\" (UniqueName: \"kubernetes.io/projected/5d1392f1-78c4-4a65-98f9-140ac98cb262-kube-api-access-r68w7\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.239062 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.239198 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-combined-ca-bundle\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.239448 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-scripts\") pod 
\"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.239542 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-svc\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.239743 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-config-data\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.239850 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-fernet-keys\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.239894 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-config\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.239928 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-sb\") pod 
\"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.239952 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.239973 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-credential-keys\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.257386 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-fr5hd"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.311605 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5554597f7c-7r294"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.313301 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.321139 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.321582 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.321584 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-vfltv" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.324722 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.340529 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5554597f7c-7r294"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.342789 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-combined-ca-bundle\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.342958 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-scripts\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.343042 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-svc\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " 
pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.343136 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-config-data\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.343232 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-fernet-keys\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.343299 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-config\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.343394 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.343473 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 
13:26:43.343543 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-credential-keys\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.343623 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g4jb\" (UniqueName: \"kubernetes.io/projected/412b1fad-6874-4067-b8a4-4209d45cf67e-kube-api-access-5g4jb\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.343691 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r68w7\" (UniqueName: \"kubernetes.io/projected/5d1392f1-78c4-4a65-98f9-140ac98cb262-kube-api-access-r68w7\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.354863 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.345521 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-svc\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.349428 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.349932 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.353249 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-scripts\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.353751 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-fernet-keys\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.344922 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-config\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.356313 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.359313 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-combined-ca-bundle\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.364793 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-credential-keys\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.365304 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-config-data\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.382866 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r68w7\" (UniqueName: \"kubernetes.io/projected/5d1392f1-78c4-4a65-98f9-140ac98cb262-kube-api-access-r68w7\") pod \"keystone-bootstrap-jdl7s\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.423262 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g4jb\" (UniqueName: \"kubernetes.io/projected/412b1fad-6874-4067-b8a4-4209d45cf67e-kube-api-access-5g4jb\") pod 
\"dnsmasq-dns-847c4cc679-fr5hd\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.438405 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-24vww"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.439453 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-24vww" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.453067 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4r9pp" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.453245 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.453355 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.453728 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.464707 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81279875-8a74-47bf-900a-dcf56249c95b-horizon-secret-key\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.464784 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm76j\" (UniqueName: \"kubernetes.io/projected/81279875-8a74-47bf-900a-dcf56249c95b-kube-api-access-wm76j\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.464810 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81279875-8a74-47bf-900a-dcf56249c95b-logs\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.464831 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-scripts\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.464846 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-config-data\") pod \"horizon-5554597f7c-7r294\" (UID: 
\"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.467517 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.469520 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.486174 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.486393 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.488940 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-24vww"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.502381 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.526861 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.566980 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81279875-8a74-47bf-900a-dcf56249c95b-horizon-secret-key\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567034 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567060 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-scripts\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567078 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-log-httpd\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567098 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-combined-ca-bundle\") pod \"neutron-db-sync-24vww\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " pod="openstack/neutron-db-sync-24vww" Feb 23 13:26:43 crc kubenswrapper[4851]: 
I0223 13:26:43.567136 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm76j\" (UniqueName: \"kubernetes.io/projected/81279875-8a74-47bf-900a-dcf56249c95b-kube-api-access-wm76j\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567158 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-config-data\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567174 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-run-httpd\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567192 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l872n\" (UniqueName: \"kubernetes.io/projected/322bc2f6-b9c6-4769-bc8c-fa7974459069-kube-api-access-l872n\") pod \"neutron-db-sync-24vww\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " pod="openstack/neutron-db-sync-24vww" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567209 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81279875-8a74-47bf-900a-dcf56249c95b-logs\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567227 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-config\") pod \"neutron-db-sync-24vww\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " pod="openstack/neutron-db-sync-24vww" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567246 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-scripts\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567262 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j59zc\" (UniqueName: \"kubernetes.io/projected/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-kube-api-access-j59zc\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567275 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-config-data\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.567314 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.568693 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81279875-8a74-47bf-900a-dcf56249c95b-logs\") pod 
\"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.569809 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-scripts\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.570553 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-config-data\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.571219 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81279875-8a74-47bf-900a-dcf56249c95b-horizon-secret-key\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.589405 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.603979 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.605807 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-fr5hd"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.616314 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-69264" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.616736 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.616863 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.617715 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.628907 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm76j\" (UniqueName: \"kubernetes.io/projected/81279875-8a74-47bf-900a-dcf56249c95b-kube-api-access-wm76j\") pod \"horizon-5554597f7c-7r294\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.635926 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.669730 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.669802 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.669831 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-scripts\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.669858 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-log-httpd\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.669881 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-combined-ca-bundle\") pod \"neutron-db-sync-24vww\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " pod="openstack/neutron-db-sync-24vww" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.669922 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-config-data\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.669938 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-run-httpd\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.669955 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l872n\" (UniqueName: \"kubernetes.io/projected/322bc2f6-b9c6-4769-bc8c-fa7974459069-kube-api-access-l872n\") pod \"neutron-db-sync-24vww\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " pod="openstack/neutron-db-sync-24vww" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.669976 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-config\") pod \"neutron-db-sync-24vww\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " pod="openstack/neutron-db-sync-24vww" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.669992 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j59zc\" (UniqueName: \"kubernetes.io/projected/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-kube-api-access-j59zc\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.670992 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-run-httpd\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " 
pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.672393 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.673812 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-log-httpd\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.687876 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-config\") pod \"neutron-db-sync-24vww\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " pod="openstack/neutron-db-sync-24vww" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.694016 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8l7sd"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.697056 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.697877 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.701313 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.703574 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rw7s7" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.703763 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.722539 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l872n\" (UniqueName: \"kubernetes.io/projected/322bc2f6-b9c6-4769-bc8c-fa7974459069-kube-api-access-l872n\") pod \"neutron-db-sync-24vww\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " pod="openstack/neutron-db-sync-24vww" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.723700 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-combined-ca-bundle\") pod \"neutron-db-sync-24vww\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " pod="openstack/neutron-db-sync-24vww" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.724624 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-config-data\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " 
pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.735409 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-scripts\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.736060 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.737081 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j59zc\" (UniqueName: \"kubernetes.io/projected/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-kube-api-access-j59zc\") pod \"ceilometer-0\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772133 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20225786-c4f3-48e3-8719-d0710aeb3655-logs\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772200 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772223 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-combined-ca-bundle\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772271 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772300 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hftbc\" (UniqueName: \"kubernetes.io/projected/20225786-c4f3-48e3-8719-d0710aeb3655-kube-api-access-hftbc\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772315 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772351 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-scripts\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772379 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772427 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772445 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwb6b\" (UniqueName: \"kubernetes.io/projected/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-kube-api-access-jwb6b\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772468 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-logs\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772501 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-config-data\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.772518 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.778429 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z2cqf"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.780131 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.856303 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8l7sd"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.864214 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-24vww" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873430 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873470 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwb6b\" (UniqueName: \"kubernetes.io/projected/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-kube-api-access-jwb6b\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873498 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-logs\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873526 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873544 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873577 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-config-data\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873595 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6q4\" (UniqueName: \"kubernetes.io/projected/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-kube-api-access-cm6q4\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873610 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873629 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20225786-c4f3-48e3-8719-d0710aeb3655-logs\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873647 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873661 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-combined-ca-bundle\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873686 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-config\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873718 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873743 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hftbc\" (UniqueName: \"kubernetes.io/projected/20225786-c4f3-48e3-8719-d0710aeb3655-kube-api-access-hftbc\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873760 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873786 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-scripts\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873814 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873846 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.873869 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.874728 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.878121 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-logs\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.880793 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.881937 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20225786-c4f3-48e3-8719-d0710aeb3655-logs\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 
13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.888144 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-config-data\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.891411 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.897365 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.905491 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.909167 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-scripts\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.910280 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.910690 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-combined-ca-bundle\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.915399 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.917916 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hftbc\" (UniqueName: \"kubernetes.io/projected/20225786-c4f3-48e3-8719-d0710aeb3655-kube-api-access-hftbc\") pod \"placement-db-sync-8l7sd\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.925188 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwb6b\" (UniqueName: \"kubernetes.io/projected/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-kube-api-access-jwb6b\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.940438 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z2cqf"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.952473 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-785dd4679c-lrw27"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.954083 4851 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.961663 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.963179 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.965976 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.966461 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.970995 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.979940 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.983895 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.984305 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm6q4\" (UniqueName: \"kubernetes.io/projected/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-kube-api-access-cm6q4\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.984651 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-config\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.985150 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.985782 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.982201 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.990966 4851 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.992084 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-config\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.992746 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:43 crc kubenswrapper[4851]: I0223 13:26:43.992747 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.025400 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-785dd4679c-lrw27"] Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.068751 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8l7sd" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.091186 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-466sr"] Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.092230 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm6q4\" (UniqueName: \"kubernetes.io/projected/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-kube-api-access-cm6q4\") pod \"dnsmasq-dns-785d8bcb8c-z2cqf\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.092542 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-466sr" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.102886 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.103341 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pr2tq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.138305 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.147183 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.196898 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197227 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197258 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197279 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197297 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-scripts\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197324 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-config-data\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197364 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxwvw\" (UniqueName: \"kubernetes.io/projected/d02e395a-d7a9-4603-be40-0743b00c9cbd-kube-api-access-rxwvw\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197406 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-logs\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197426 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-logs\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197454 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-horizon-secret-key\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197493 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rrnv\" (UniqueName: \"kubernetes.io/projected/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-kube-api-access-9rrnv\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197513 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.197535 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.203448 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-466sr"] Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.248523 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-n6qtq"] Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.249679 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.255822 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.256023 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wp2ng" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.256514 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.262794 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n6qtq"] Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.271417 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299364 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-config-data\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299420 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxwvw\" (UniqueName: \"kubernetes.io/projected/d02e395a-d7a9-4603-be40-0743b00c9cbd-kube-api-access-rxwvw\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299465 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-logs\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " 
pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299487 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-logs\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299513 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-horizon-secret-key\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299549 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rrnv\" (UniqueName: \"kubernetes.io/projected/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-kube-api-access-9rrnv\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299570 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299593 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-combined-ca-bundle\") pod \"barbican-db-sync-466sr\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " pod="openstack/barbican-db-sync-466sr" Feb 23 13:26:44 crc 
kubenswrapper[4851]: I0223 13:26:44.299619 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299642 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299662 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-db-sync-config-data\") pod \"barbican-db-sync-466sr\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " pod="openstack/barbican-db-sync-466sr" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299682 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299700 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htgg8\" (UniqueName: \"kubernetes.io/projected/23eaed53-2c8f-46ae-bc53-87ab7855282a-kube-api-access-htgg8\") pod \"barbican-db-sync-466sr\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " pod="openstack/barbican-db-sync-466sr" Feb 23 13:26:44 crc 
kubenswrapper[4851]: I0223 13:26:44.299719 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299739 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.299757 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-scripts\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.300642 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-scripts\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.301626 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.305665 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.301999 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.302416 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-logs\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.302987 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-logs\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.301690 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-config-data\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.313510 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-horizon-secret-key\") pod \"horizon-785dd4679c-lrw27\" (UID: 
\"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.315486 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.317211 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.320234 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.333500 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rrnv\" (UniqueName: \"kubernetes.io/projected/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-kube-api-access-9rrnv\") pod \"horizon-785dd4679c-lrw27\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.339438 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxwvw\" (UniqueName: \"kubernetes.io/projected/d02e395a-d7a9-4603-be40-0743b00c9cbd-kube-api-access-rxwvw\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " 
pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.388430 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.401289 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-scripts\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.401377 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-config-data\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.401414 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-combined-ca-bundle\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.401450 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-combined-ca-bundle\") pod \"barbican-db-sync-466sr\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " pod="openstack/barbican-db-sync-466sr" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 
13:26:44.401474 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-etc-machine-id\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.401515 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-db-sync-config-data\") pod \"barbican-db-sync-466sr\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " pod="openstack/barbican-db-sync-466sr" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.401547 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htgg8\" (UniqueName: \"kubernetes.io/projected/23eaed53-2c8f-46ae-bc53-87ab7855282a-kube-api-access-htgg8\") pod \"barbican-db-sync-466sr\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " pod="openstack/barbican-db-sync-466sr" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.401588 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-db-sync-config-data\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.401630 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74xjm\" (UniqueName: \"kubernetes.io/projected/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-kube-api-access-74xjm\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.406292 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-combined-ca-bundle\") pod \"barbican-db-sync-466sr\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " pod="openstack/barbican-db-sync-466sr" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.406376 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-db-sync-config-data\") pod \"barbican-db-sync-466sr\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " pod="openstack/barbican-db-sync-466sr" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.417858 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htgg8\" (UniqueName: \"kubernetes.io/projected/23eaed53-2c8f-46ae-bc53-87ab7855282a-kube-api-access-htgg8\") pod \"barbican-db-sync-466sr\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " pod="openstack/barbican-db-sync-466sr" Feb 23 13:26:44 crc kubenswrapper[4851]: W0223 13:26:44.488052 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d1392f1_78c4_4a65_98f9_140ac98cb262.slice/crio-5a30f9d30ffc4f0923ba478e44647633a17b920ecb0b34325be06f878835e1c0 WatchSource:0}: Error finding container 5a30f9d30ffc4f0923ba478e44647633a17b920ecb0b34325be06f878835e1c0: Status 404 returned error can't find the container with id 5a30f9d30ffc4f0923ba478e44647633a17b920ecb0b34325be06f878835e1c0 Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.492028 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jdl7s"] Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.502850 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5554597f7c-7r294"] Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.503548 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-config-data\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.503687 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-combined-ca-bundle\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.503722 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-etc-machine-id\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.503782 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-db-sync-config-data\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.503812 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74xjm\" (UniqueName: \"kubernetes.io/projected/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-kube-api-access-74xjm\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.503867 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-scripts\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.506274 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-etc-machine-id\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: W0223 13:26:44.512086 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81279875_8a74_47bf_900a_dcf56249c95b.slice/crio-a0dc766ce21a89d4b67d394e99458857ceccecf191e6c4bb24e45e3a54dd49aa WatchSource:0}: Error finding container a0dc766ce21a89d4b67d394e99458857ceccecf191e6c4bb24e45e3a54dd49aa: Status 404 returned error can't find the container with id a0dc766ce21a89d4b67d394e99458857ceccecf191e6c4bb24e45e3a54dd49aa Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.512849 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-scripts\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.520840 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-db-sync-config-data\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.521954 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-config-data\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.523235 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-combined-ca-bundle\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.534497 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74xjm\" (UniqueName: \"kubernetes.io/projected/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-kube-api-access-74xjm\") pod \"cinder-db-sync-n6qtq\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.541318 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-466sr" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.581085 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.581971 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.599387 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.898078 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5554597f7c-7r294" event={"ID":"81279875-8a74-47bf-900a-dcf56249c95b","Type":"ContainerStarted","Data":"a0dc766ce21a89d4b67d394e99458857ceccecf191e6c4bb24e45e3a54dd49aa"} Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.899080 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdl7s" event={"ID":"5d1392f1-78c4-4a65-98f9-140ac98cb262","Type":"ContainerStarted","Data":"f07d880cf8324b74c2c29a25e4f5847b72eaa950dac42f93412f72b9ff8aaf47"} Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.899106 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdl7s" event={"ID":"5d1392f1-78c4-4a65-98f9-140ac98cb262","Type":"ContainerStarted","Data":"5a30f9d30ffc4f0923ba478e44647633a17b920ecb0b34325be06f878835e1c0"} Feb 23 13:26:44 crc kubenswrapper[4851]: I0223 13:26:44.929355 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jdl7s" podStartSLOduration=1.929323375 podStartE2EDuration="1.929323375s" podCreationTimestamp="2026-02-23 13:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:26:44.928975695 +0000 UTC m=+1159.610679393" watchObservedRunningTime="2026-02-23 13:26:44.929323375 +0000 UTC m=+1159.611027043" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.015965 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8l7sd"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.038560 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.081007 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-847c4cc679-fr5hd"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.086301 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z2cqf"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.096098 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-24vww"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.205353 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-466sr"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.369275 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:26:45 crc kubenswrapper[4851]: W0223 13:26:45.387521 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2637cef_c7f1_4dc7_9f61_8f2f2287d92c.slice/crio-dac8a145194065e6736bd7a06b38f914cebd7d5367cd6d5417cc7aafded64a31 WatchSource:0}: Error finding container dac8a145194065e6736bd7a06b38f914cebd7d5367cd6d5417cc7aafded64a31: Status 404 returned error can't find the container with id dac8a145194065e6736bd7a06b38f914cebd7d5367cd6d5417cc7aafded64a31 Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.429844 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n6qtq"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.438366 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-785dd4679c-lrw27"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.475806 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.603534 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5554597f7c-7r294"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.661386 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.681073 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.702012 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77dc8cd779-8bfdj"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.708798 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.710173 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77dc8cd779-8bfdj"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.716799 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.847931 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-config-data\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.847989 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2b7m\" (UniqueName: \"kubernetes.io/projected/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-kube-api-access-h2b7m\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.848143 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-scripts\") pod \"horizon-77dc8cd779-8bfdj\" (UID: 
\"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.848187 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-horizon-secret-key\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.848320 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-logs\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.915859 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n6qtq" event={"ID":"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c","Type":"ContainerStarted","Data":"200467c5dd0830690f8ac27bc980e33a004e3e355d1c6eb65a3a433d167ed0c0"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.918427 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c","Type":"ContainerStarted","Data":"dac8a145194065e6736bd7a06b38f914cebd7d5367cd6d5417cc7aafded64a31"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.922156 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d02e395a-d7a9-4603-be40-0743b00c9cbd","Type":"ContainerStarted","Data":"a17a2ece65c532f488740aa5027376a47a099794f532b9f093edcf67371b9cf2"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.925029 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-24vww" 
event={"ID":"322bc2f6-b9c6-4769-bc8c-fa7974459069","Type":"ContainerStarted","Data":"5393f8055167ffe99ba5e48976a116e48c79832c72e4bfc51819f2c0d22161ea"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.925060 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-24vww" event={"ID":"322bc2f6-b9c6-4769-bc8c-fa7974459069","Type":"ContainerStarted","Data":"a7a6671b1d161befe2a3ecf64877c47271649a5a642a176ef47d02e200a02722"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.926506 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-785dd4679c-lrw27" event={"ID":"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4","Type":"ContainerStarted","Data":"4d62fd4c580b9177b3ae79b3257c395214b6b986984bda3d6a9849968d3fe266"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.927801 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-466sr" event={"ID":"23eaed53-2c8f-46ae-bc53-87ab7855282a","Type":"ContainerStarted","Data":"6c026c4a8c7a4c6b77862b621bd5c9d4e6c9a32ec933e328c616284f2ef3fc88"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.929431 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8l7sd" event={"ID":"20225786-c4f3-48e3-8719-d0710aeb3655","Type":"ContainerStarted","Data":"f4ff11f3921b6e8edcae1cdb082cae575047368e7b3798f81fd9ddea27ffe381"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.942390 4851 generic.go:334] "Generic (PLEG): container finished" podID="f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" containerID="f57844956c08dcd3e22c2fe98af08947b70d1ec43e0a8a04b8bb21f1f4cc6d5f" exitCode=0 Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.942476 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" event={"ID":"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed","Type":"ContainerDied","Data":"f57844956c08dcd3e22c2fe98af08947b70d1ec43e0a8a04b8bb21f1f4cc6d5f"} Feb 23 13:26:45 crc kubenswrapper[4851]: 
I0223 13:26:45.942518 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" event={"ID":"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed","Type":"ContainerStarted","Data":"4d6707050a1df4aa024d30db1e9cad10c0aa304905ae3106e5d8cb22233fa30c"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.942644 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-24vww" podStartSLOduration=2.9426293340000003 podStartE2EDuration="2.942629334s" podCreationTimestamp="2026-02-23 13:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:26:45.936918763 +0000 UTC m=+1160.618622441" watchObservedRunningTime="2026-02-23 13:26:45.942629334 +0000 UTC m=+1160.624333012" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.947507 4851 generic.go:334] "Generic (PLEG): container finished" podID="412b1fad-6874-4067-b8a4-4209d45cf67e" containerID="203b91137dab9eac1330ebcbf0a0ddd410a3e950d223d437029e838168cba0b5" exitCode=0 Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.947568 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" event={"ID":"412b1fad-6874-4067-b8a4-4209d45cf67e","Type":"ContainerDied","Data":"203b91137dab9eac1330ebcbf0a0ddd410a3e950d223d437029e838168cba0b5"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.947594 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" event={"ID":"412b1fad-6874-4067-b8a4-4209d45cf67e","Type":"ContainerStarted","Data":"7f3350617cbaf90f6fa0ff5fcbeed3396330c4e569aa9e846939d5692ce7806c"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.950489 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-config-data\") pod 
\"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.950534 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2b7m\" (UniqueName: \"kubernetes.io/projected/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-kube-api-access-h2b7m\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.950571 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-scripts\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.950602 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-horizon-secret-key\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.950743 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-logs\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.951462 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-logs\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 
crc kubenswrapper[4851]: I0223 13:26:45.952583 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-scripts\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.952728 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea","Type":"ContainerStarted","Data":"20fdeb0c21c84c9ccb37c20cb5d82bb2cf8ab5bd210bf1302d210369879bce92"} Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.954192 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-config-data\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.960036 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-horizon-secret-key\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:45 crc kubenswrapper[4851]: I0223 13:26:45.976845 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2b7m\" (UniqueName: \"kubernetes.io/projected/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-kube-api-access-h2b7m\") pod \"horizon-77dc8cd779-8bfdj\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.059271 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.427561 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.561478 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-config\") pod \"412b1fad-6874-4067-b8a4-4209d45cf67e\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.561652 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g4jb\" (UniqueName: \"kubernetes.io/projected/412b1fad-6874-4067-b8a4-4209d45cf67e-kube-api-access-5g4jb\") pod \"412b1fad-6874-4067-b8a4-4209d45cf67e\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.561742 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-nb\") pod \"412b1fad-6874-4067-b8a4-4209d45cf67e\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.561814 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-swift-storage-0\") pod \"412b1fad-6874-4067-b8a4-4209d45cf67e\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.561885 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-svc\") pod \"412b1fad-6874-4067-b8a4-4209d45cf67e\" (UID: 
\"412b1fad-6874-4067-b8a4-4209d45cf67e\") " Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.561917 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-sb\") pod \"412b1fad-6874-4067-b8a4-4209d45cf67e\" (UID: \"412b1fad-6874-4067-b8a4-4209d45cf67e\") " Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.567231 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/412b1fad-6874-4067-b8a4-4209d45cf67e-kube-api-access-5g4jb" (OuterVolumeSpecName: "kube-api-access-5g4jb") pod "412b1fad-6874-4067-b8a4-4209d45cf67e" (UID: "412b1fad-6874-4067-b8a4-4209d45cf67e"). InnerVolumeSpecName "kube-api-access-5g4jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.588988 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-config" (OuterVolumeSpecName: "config") pod "412b1fad-6874-4067-b8a4-4209d45cf67e" (UID: "412b1fad-6874-4067-b8a4-4209d45cf67e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.591846 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "412b1fad-6874-4067-b8a4-4209d45cf67e" (UID: "412b1fad-6874-4067-b8a4-4209d45cf67e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.592317 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "412b1fad-6874-4067-b8a4-4209d45cf67e" (UID: "412b1fad-6874-4067-b8a4-4209d45cf67e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.592512 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "412b1fad-6874-4067-b8a4-4209d45cf67e" (UID: "412b1fad-6874-4067-b8a4-4209d45cf67e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.602505 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "412b1fad-6874-4067-b8a4-4209d45cf67e" (UID: "412b1fad-6874-4067-b8a4-4209d45cf67e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.664196 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.664231 4851 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.664241 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.664250 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.664259 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/412b1fad-6874-4067-b8a4-4209d45cf67e-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.664269 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g4jb\" (UniqueName: \"kubernetes.io/projected/412b1fad-6874-4067-b8a4-4209d45cf67e-kube-api-access-5g4jb\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.779682 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77dc8cd779-8bfdj"] Feb 23 13:26:46 crc kubenswrapper[4851]: W0223 13:26:46.783014 4851 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod990d8cf1_3b35_43b4_ba97_ed3b73f92ba3.slice/crio-51a50302cc1ca89325a046f7814873339fe921efbcbaf6451180a5ae6c431b5a WatchSource:0}: Error finding container 51a50302cc1ca89325a046f7814873339fe921efbcbaf6451180a5ae6c431b5a: Status 404 returned error can't find the container with id 51a50302cc1ca89325a046f7814873339fe921efbcbaf6451180a5ae6c431b5a Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.975077 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" event={"ID":"412b1fad-6874-4067-b8a4-4209d45cf67e","Type":"ContainerDied","Data":"7f3350617cbaf90f6fa0ff5fcbeed3396330c4e569aa9e846939d5692ce7806c"} Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.975127 4851 scope.go:117] "RemoveContainer" containerID="203b91137dab9eac1330ebcbf0a0ddd410a3e950d223d437029e838168cba0b5" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.975225 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-fr5hd" Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.986960 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d02e395a-d7a9-4603-be40-0743b00c9cbd","Type":"ContainerStarted","Data":"f98fa9387243f03027cca3b74fe55b0df752356d075b3446d58cd84a11bf66b2"} Feb 23 13:26:46 crc kubenswrapper[4851]: I0223 13:26:46.991597 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77dc8cd779-8bfdj" event={"ID":"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3","Type":"ContainerStarted","Data":"51a50302cc1ca89325a046f7814873339fe921efbcbaf6451180a5ae6c431b5a"} Feb 23 13:26:47 crc kubenswrapper[4851]: I0223 13:26:47.024264 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" event={"ID":"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed","Type":"ContainerStarted","Data":"84769ef9511ed5a14d6f39adaace7b8cc5413fd3576a0e21e216d71f3cfefd1f"} Feb 23 13:26:47 crc kubenswrapper[4851]: I0223 13:26:47.024480 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:47 crc kubenswrapper[4851]: I0223 13:26:47.040101 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c","Type":"ContainerStarted","Data":"575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a"} Feb 23 13:26:47 crc kubenswrapper[4851]: I0223 13:26:47.066657 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-fr5hd"] Feb 23 13:26:47 crc kubenswrapper[4851]: I0223 13:26:47.107405 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-fr5hd"] Feb 23 13:26:47 crc kubenswrapper[4851]: I0223 13:26:47.113197 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" podStartSLOduration=4.113178343 podStartE2EDuration="4.113178343s" podCreationTimestamp="2026-02-23 13:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:26:47.042625367 +0000 UTC m=+1161.724329065" watchObservedRunningTime="2026-02-23 13:26:47.113178343 +0000 UTC m=+1161.794882021" Feb 23 13:26:47 crc kubenswrapper[4851]: I0223 13:26:47.993693 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="412b1fad-6874-4067-b8a4-4209d45cf67e" path="/var/lib/kubelet/pods/412b1fad-6874-4067-b8a4-4209d45cf67e/volumes" Feb 23 13:26:48 crc kubenswrapper[4851]: I0223 13:26:48.073396 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c","Type":"ContainerStarted","Data":"fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97"} Feb 23 13:26:48 crc kubenswrapper[4851]: I0223 13:26:48.073601 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" containerName="glance-log" containerID="cri-o://575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a" gracePeriod=30 Feb 23 13:26:48 crc kubenswrapper[4851]: I0223 13:26:48.074565 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" containerName="glance-httpd" containerID="cri-o://fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97" gracePeriod=30 Feb 23 13:26:48 crc kubenswrapper[4851]: I0223 13:26:48.094205 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d02e395a-d7a9-4603-be40-0743b00c9cbd","Type":"ContainerStarted","Data":"6491fe0823a1060ae7c3eb6119b2d94e07ac56c145642b624956b8445d8df23d"} Feb 23 13:26:48 crc kubenswrapper[4851]: I0223 13:26:48.108639 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.108623647 podStartE2EDuration="5.108623647s" podCreationTimestamp="2026-02-23 13:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:26:48.100289852 +0000 UTC m=+1162.781993540" watchObservedRunningTime="2026-02-23 13:26:48.108623647 +0000 UTC m=+1162.790327325" Feb 23 13:26:48 crc kubenswrapper[4851]: I0223 13:26:48.908486 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.033641 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-scripts\") pod \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.033972 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-combined-ca-bundle\") pod \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.034012 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-config-data\") pod \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " Feb 23 13:26:49 crc kubenswrapper[4851]: 
I0223 13:26:49.034119 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwb6b\" (UniqueName: \"kubernetes.io/projected/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-kube-api-access-jwb6b\") pod \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.034569 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-logs\") pod \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.034711 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-httpd-run\") pod \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.034737 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-public-tls-certs\") pod \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.034776 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\" (UID: \"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c\") " Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.035012 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-logs" (OuterVolumeSpecName: "logs") pod "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" (UID: 
"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.035238 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.036399 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" (UID: "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.040363 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-kube-api-access-jwb6b" (OuterVolumeSpecName: "kube-api-access-jwb6b") pod "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" (UID: "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c"). InnerVolumeSpecName "kube-api-access-jwb6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.049622 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-scripts" (OuterVolumeSpecName: "scripts") pod "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" (UID: "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.053608 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" (UID: "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c"). 
InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.064438 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" (UID: "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.088196 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" (UID: "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.136636 4851 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.136667 4851 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.136707 4851 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.136717 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.136726 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.136735 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwb6b\" (UniqueName: \"kubernetes.io/projected/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-kube-api-access-jwb6b\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.169617 4851 generic.go:334] "Generic (PLEG): container finished" podID="f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" containerID="fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97" exitCode=0 Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.169648 4851 generic.go:334] "Generic (PLEG): container finished" podID="f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" containerID="575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a" exitCode=143 Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.169749 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c","Type":"ContainerDied","Data":"fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97"} Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.169812 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c","Type":"ContainerDied","Data":"575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a"} Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.169826 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"f2637cef-c7f1-4dc7-9f61-8f2f2287d92c","Type":"ContainerDied","Data":"dac8a145194065e6736bd7a06b38f914cebd7d5367cd6d5417cc7aafded64a31"} Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.169847 4851 scope.go:117] "RemoveContainer" containerID="fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.169774 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d02e395a-d7a9-4603-be40-0743b00c9cbd" containerName="glance-log" containerID="cri-o://f98fa9387243f03027cca3b74fe55b0df752356d075b3446d58cd84a11bf66b2" gracePeriod=30 Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.170108 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.170180 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d02e395a-d7a9-4603-be40-0743b00c9cbd" containerName="glance-httpd" containerID="cri-o://6491fe0823a1060ae7c3eb6119b2d94e07ac56c145642b624956b8445d8df23d" gracePeriod=30 Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.181690 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-config-data" (OuterVolumeSpecName: "config-data") pod "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" (UID: "f2637cef-c7f1-4dc7-9f61-8f2f2287d92c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.198981 4851 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.215772 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.215754031 podStartE2EDuration="6.215754031s" podCreationTimestamp="2026-02-23 13:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:26:49.201215139 +0000 UTC m=+1163.882918817" watchObservedRunningTime="2026-02-23 13:26:49.215754031 +0000 UTC m=+1163.897457709" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.240513 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.240547 4851 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.559858 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.565394 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.585779 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:26:49 crc kubenswrapper[4851]: E0223 13:26:49.586102 4851 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="412b1fad-6874-4067-b8a4-4209d45cf67e" containerName="init" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.586117 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="412b1fad-6874-4067-b8a4-4209d45cf67e" containerName="init" Feb 23 13:26:49 crc kubenswrapper[4851]: E0223 13:26:49.586137 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" containerName="glance-httpd" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.586144 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" containerName="glance-httpd" Feb 23 13:26:49 crc kubenswrapper[4851]: E0223 13:26:49.586161 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" containerName="glance-log" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.586167 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" containerName="glance-log" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.586313 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="412b1fad-6874-4067-b8a4-4209d45cf67e" containerName="init" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.586341 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" containerName="glance-httpd" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.586359 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" containerName="glance-log" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.587286 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.598064 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.611542 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.616581 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.650951 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-config-data\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.651017 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.651070 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-logs\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.651099 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.651161 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptfm9\" (UniqueName: \"kubernetes.io/projected/144b9733-ebbc-4841-99bc-5629575dbed3-kube-api-access-ptfm9\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.651184 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.651219 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.651258 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-scripts\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: E0223 13:26:49.723881 4851 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" 
err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd02e395a_d7a9_4603_be40_0743b00c9cbd.slice/crio-conmon-6491fe0823a1060ae7c3eb6119b2d94e07ac56c145642b624956b8445d8df23d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2637cef_c7f1_4dc7_9f61_8f2f2287d92c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d1392f1_78c4_4a65_98f9_140ac98cb262.slice/crio-f07d880cf8324b74c2c29a25e4f5847b72eaa950dac42f93412f72b9ff8aaf47.scope\": RecentStats: unable to find data in memory cache]" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.753514 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.753941 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptfm9\" (UniqueName: \"kubernetes.io/projected/144b9733-ebbc-4841-99bc-5629575dbed3-kube-api-access-ptfm9\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.753968 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.754004 4851 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.754046 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-scripts\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.754065 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-config-data\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.754098 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.754565 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-logs\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.755304 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.755341 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.755681 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-logs\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.757946 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.759066 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-scripts\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.759870 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-config-data\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " 
pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.760430 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.772531 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptfm9\" (UniqueName: \"kubernetes.io/projected/144b9733-ebbc-4841-99bc-5629575dbed3-kube-api-access-ptfm9\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.802230 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.913042 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:26:49 crc kubenswrapper[4851]: I0223 13:26:49.999291 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2637cef-c7f1-4dc7-9f61-8f2f2287d92c" path="/var/lib/kubelet/pods/f2637cef-c7f1-4dc7-9f61-8f2f2287d92c/volumes" Feb 23 13:26:50 crc kubenswrapper[4851]: I0223 13:26:50.192726 4851 generic.go:334] "Generic (PLEG): container finished" podID="5d1392f1-78c4-4a65-98f9-140ac98cb262" containerID="f07d880cf8324b74c2c29a25e4f5847b72eaa950dac42f93412f72b9ff8aaf47" exitCode=0 Feb 23 13:26:50 crc kubenswrapper[4851]: I0223 13:26:50.192798 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdl7s" event={"ID":"5d1392f1-78c4-4a65-98f9-140ac98cb262","Type":"ContainerDied","Data":"f07d880cf8324b74c2c29a25e4f5847b72eaa950dac42f93412f72b9ff8aaf47"} Feb 23 13:26:50 crc kubenswrapper[4851]: I0223 13:26:50.199029 4851 generic.go:334] "Generic (PLEG): container finished" podID="d02e395a-d7a9-4603-be40-0743b00c9cbd" containerID="6491fe0823a1060ae7c3eb6119b2d94e07ac56c145642b624956b8445d8df23d" exitCode=0 Feb 23 13:26:50 crc kubenswrapper[4851]: I0223 13:26:50.199057 4851 generic.go:334] "Generic (PLEG): container finished" podID="d02e395a-d7a9-4603-be40-0743b00c9cbd" containerID="f98fa9387243f03027cca3b74fe55b0df752356d075b3446d58cd84a11bf66b2" exitCode=143 Feb 23 13:26:50 crc kubenswrapper[4851]: I0223 13:26:50.199085 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d02e395a-d7a9-4603-be40-0743b00c9cbd","Type":"ContainerDied","Data":"6491fe0823a1060ae7c3eb6119b2d94e07ac56c145642b624956b8445d8df23d"} Feb 23 13:26:50 crc kubenswrapper[4851]: I0223 13:26:50.199154 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d02e395a-d7a9-4603-be40-0743b00c9cbd","Type":"ContainerDied","Data":"f98fa9387243f03027cca3b74fe55b0df752356d075b3446d58cd84a11bf66b2"} Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.761415 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-785dd4679c-lrw27"] Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.819484 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69f9fbd4d-lldd8"] Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.821173 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.824249 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.835064 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.854408 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69f9fbd4d-lldd8"] Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.883717 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77dc8cd779-8bfdj"] Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.893242 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-logs\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.893293 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-config-data\") pod \"horizon-69f9fbd4d-lldd8\" (UID: 
\"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.893362 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-tls-certs\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.893413 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-scripts\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.893451 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-secret-key\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.893488 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-combined-ca-bundle\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.893544 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdjg6\" (UniqueName: \"kubernetes.io/projected/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-kube-api-access-pdjg6\") pod \"horizon-69f9fbd4d-lldd8\" 
(UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.907565 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-64f4c4f478-f578z"] Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.911751 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.918128 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64f4c4f478-f578z"] Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996069 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-logs\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996112 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-config-data\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996166 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c52d079-d9d5-469e-9319-08266bea1f82-combined-ca-bundle\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996187 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-tls-certs\") pod 
\"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996213 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c52d079-d9d5-469e-9319-08266bea1f82-logs\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996236 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-scripts\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996254 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c52d079-d9d5-469e-9319-08266bea1f82-scripts\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996271 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c52d079-d9d5-469e-9319-08266bea1f82-config-data\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996291 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-secret-key\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " 
pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996318 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-combined-ca-bundle\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996359 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jmsv\" (UniqueName: \"kubernetes.io/projected/1c52d079-d9d5-469e-9319-08266bea1f82-kube-api-access-2jmsv\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996387 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c52d079-d9d5-469e-9319-08266bea1f82-horizon-tls-certs\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996415 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdjg6\" (UniqueName: \"kubernetes.io/projected/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-kube-api-access-pdjg6\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.996442 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1c52d079-d9d5-469e-9319-08266bea1f82-horizon-secret-key\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " 
pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.997000 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-logs\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.998091 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-config-data\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:51 crc kubenswrapper[4851]: I0223 13:26:51.998534 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-scripts\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.006586 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-secret-key\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.006977 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-combined-ca-bundle\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.007091 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-tls-certs\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.015975 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdjg6\" (UniqueName: \"kubernetes.io/projected/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-kube-api-access-pdjg6\") pod \"horizon-69f9fbd4d-lldd8\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.097813 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1c52d079-d9d5-469e-9319-08266bea1f82-horizon-secret-key\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.098189 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c52d079-d9d5-469e-9319-08266bea1f82-combined-ca-bundle\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.098216 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c52d079-d9d5-469e-9319-08266bea1f82-logs\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.098239 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1c52d079-d9d5-469e-9319-08266bea1f82-scripts\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.098264 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c52d079-d9d5-469e-9319-08266bea1f82-config-data\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.098343 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jmsv\" (UniqueName: \"kubernetes.io/projected/1c52d079-d9d5-469e-9319-08266bea1f82-kube-api-access-2jmsv\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.098376 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c52d079-d9d5-469e-9319-08266bea1f82-horizon-tls-certs\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.099961 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c52d079-d9d5-469e-9319-08266bea1f82-scripts\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.099997 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c52d079-d9d5-469e-9319-08266bea1f82-config-data\") pod \"horizon-64f4c4f478-f578z\" (UID: 
\"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.100300 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c52d079-d9d5-469e-9319-08266bea1f82-logs\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.103279 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c52d079-d9d5-469e-9319-08266bea1f82-horizon-tls-certs\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.108516 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1c52d079-d9d5-469e-9319-08266bea1f82-horizon-secret-key\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.108974 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c52d079-d9d5-469e-9319-08266bea1f82-combined-ca-bundle\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.127055 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jmsv\" (UniqueName: \"kubernetes.io/projected/1c52d079-d9d5-469e-9319-08266bea1f82-kube-api-access-2jmsv\") pod \"horizon-64f4c4f478-f578z\" (UID: \"1c52d079-d9d5-469e-9319-08266bea1f82\") " pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:52 crc kubenswrapper[4851]: 
I0223 13:26:52.141072 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:26:52 crc kubenswrapper[4851]: I0223 13:26:52.234611 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:26:54 crc kubenswrapper[4851]: I0223 13:26:54.140755 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:26:54 crc kubenswrapper[4851]: I0223 13:26:54.196990 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rpncf"] Feb 23 13:26:54 crc kubenswrapper[4851]: I0223 13:26:54.197301 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" podUID="f5fa0050-b730-4479-8add-4c4212c014d1" containerName="dnsmasq-dns" containerID="cri-o://511c696e9e1cf6fe368eb437386798f75a8acc628866f6473d06ea297f5e5646" gracePeriod=10 Feb 23 13:26:54 crc kubenswrapper[4851]: I0223 13:26:54.257460 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" podUID="f5fa0050-b730-4479-8add-4c4212c014d1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 23 13:26:55 crc kubenswrapper[4851]: I0223 13:26:55.242745 4851 generic.go:334] "Generic (PLEG): container finished" podID="f5fa0050-b730-4479-8add-4c4212c014d1" containerID="511c696e9e1cf6fe368eb437386798f75a8acc628866f6473d06ea297f5e5646" exitCode=0 Feb 23 13:26:55 crc kubenswrapper[4851]: I0223 13:26:55.242787 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" event={"ID":"f5fa0050-b730-4479-8add-4c4212c014d1","Type":"ContainerDied","Data":"511c696e9e1cf6fe368eb437386798f75a8acc628866f6473d06ea297f5e5646"} Feb 23 13:26:59 crc kubenswrapper[4851]: I0223 13:26:59.257357 4851 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" podUID="f5fa0050-b730-4479-8add-4c4212c014d1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 23 13:27:01 crc kubenswrapper[4851]: E0223 13:27:01.729173 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 23 13:27:01 crc kubenswrapper[4851]: E0223 13:27:01.729759 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hftbc,ReadOnly:true,MountPath:/va
r/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-8l7sd_openstack(20225786-c4f3-48e3-8719-d0710aeb3655): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:27:01 crc kubenswrapper[4851]: E0223 13:27:01.731001 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-8l7sd" podUID="20225786-c4f3-48e3-8719-d0710aeb3655" Feb 23 13:27:02 crc kubenswrapper[4851]: I0223 13:27:02.298594 4851 generic.go:334] "Generic (PLEG): container finished" podID="322bc2f6-b9c6-4769-bc8c-fa7974459069" containerID="5393f8055167ffe99ba5e48976a116e48c79832c72e4bfc51819f2c0d22161ea" exitCode=0 Feb 23 13:27:02 crc kubenswrapper[4851]: I0223 13:27:02.298674 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-24vww" event={"ID":"322bc2f6-b9c6-4769-bc8c-fa7974459069","Type":"ContainerDied","Data":"5393f8055167ffe99ba5e48976a116e48c79832c72e4bfc51819f2c0d22161ea"} Feb 23 13:27:02 crc kubenswrapper[4851]: E0223 13:27:02.300486 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-8l7sd" podUID="20225786-c4f3-48e3-8719-d0710aeb3655" Feb 23 13:27:04 crc kubenswrapper[4851]: I0223 13:27:04.257121 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" podUID="f5fa0050-b730-4479-8add-4c4212c014d1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Feb 23 13:27:04 crc kubenswrapper[4851]: I0223 13:27:04.257720 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:27:07 crc kubenswrapper[4851]: E0223 13:27:07.867295 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 23 13:27:07 crc kubenswrapper[4851]: E0223 13:27:07.867790 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64h684hdhffh5d5h89h564h687h55bh5dfhf9h697hb4h596h89h689hd5h5d8h79h5cbh54h55ch65fh7dh598h66dhch5f4h549h67hbdh64q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2b7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-77dc8cd779-8bfdj_openstack(990d8cf1-3b35-43b4-ba97-ed3b73f92ba3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:27:07 crc kubenswrapper[4851]: E0223 
13:27:07.869720 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-77dc8cd779-8bfdj" podUID="990d8cf1-3b35-43b4-ba97-ed3b73f92ba3" Feb 23 13:27:07 crc kubenswrapper[4851]: I0223 13:27:07.942723 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.094932 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-config-data\") pod \"5d1392f1-78c4-4a65-98f9-140ac98cb262\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.094990 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r68w7\" (UniqueName: \"kubernetes.io/projected/5d1392f1-78c4-4a65-98f9-140ac98cb262-kube-api-access-r68w7\") pod \"5d1392f1-78c4-4a65-98f9-140ac98cb262\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.095045 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-combined-ca-bundle\") pod \"5d1392f1-78c4-4a65-98f9-140ac98cb262\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.095840 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-scripts\") pod 
\"5d1392f1-78c4-4a65-98f9-140ac98cb262\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.096239 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-fernet-keys\") pod \"5d1392f1-78c4-4a65-98f9-140ac98cb262\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.096269 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-credential-keys\") pod \"5d1392f1-78c4-4a65-98f9-140ac98cb262\" (UID: \"5d1392f1-78c4-4a65-98f9-140ac98cb262\") " Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.101085 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-scripts" (OuterVolumeSpecName: "scripts") pod "5d1392f1-78c4-4a65-98f9-140ac98cb262" (UID: "5d1392f1-78c4-4a65-98f9-140ac98cb262"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.101168 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5d1392f1-78c4-4a65-98f9-140ac98cb262" (UID: "5d1392f1-78c4-4a65-98f9-140ac98cb262"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.102383 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5d1392f1-78c4-4a65-98f9-140ac98cb262" (UID: "5d1392f1-78c4-4a65-98f9-140ac98cb262"). 
InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.102541 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d1392f1-78c4-4a65-98f9-140ac98cb262-kube-api-access-r68w7" (OuterVolumeSpecName: "kube-api-access-r68w7") pod "5d1392f1-78c4-4a65-98f9-140ac98cb262" (UID: "5d1392f1-78c4-4a65-98f9-140ac98cb262"). InnerVolumeSpecName "kube-api-access-r68w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.119653 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-config-data" (OuterVolumeSpecName: "config-data") pod "5d1392f1-78c4-4a65-98f9-140ac98cb262" (UID: "5d1392f1-78c4-4a65-98f9-140ac98cb262"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.120490 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d1392f1-78c4-4a65-98f9-140ac98cb262" (UID: "5d1392f1-78c4-4a65-98f9-140ac98cb262"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.198384 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.198408 4851 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.198421 4851 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.198430 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.198439 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r68w7\" (UniqueName: \"kubernetes.io/projected/5d1392f1-78c4-4a65-98f9-140ac98cb262-kube-api-access-r68w7\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.198450 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1392f1-78c4-4a65-98f9-140ac98cb262-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.346243 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jdl7s" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.346231 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jdl7s" event={"ID":"5d1392f1-78c4-4a65-98f9-140ac98cb262","Type":"ContainerDied","Data":"5a30f9d30ffc4f0923ba478e44647633a17b920ecb0b34325be06f878835e1c0"} Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.346370 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a30f9d30ffc4f0923ba478e44647633a17b920ecb0b34325be06f878835e1c0" Feb 23 13:27:08 crc kubenswrapper[4851]: E0223 13:27:08.390665 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 23 13:27:08 crc kubenswrapper[4851]: E0223 13:27:08.390801 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-htgg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-466sr_openstack(23eaed53-2c8f-46ae-bc53-87ab7855282a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:27:08 crc kubenswrapper[4851]: E0223 13:27:08.392306 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-466sr" 
podUID="23eaed53-2c8f-46ae-bc53-87ab7855282a" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.464719 4851 scope.go:117] "RemoveContainer" containerID="575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.513741 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-24vww" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.705782 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l872n\" (UniqueName: \"kubernetes.io/projected/322bc2f6-b9c6-4769-bc8c-fa7974459069-kube-api-access-l872n\") pod \"322bc2f6-b9c6-4769-bc8c-fa7974459069\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.706095 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-config\") pod \"322bc2f6-b9c6-4769-bc8c-fa7974459069\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.706154 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-combined-ca-bundle\") pod \"322bc2f6-b9c6-4769-bc8c-fa7974459069\" (UID: \"322bc2f6-b9c6-4769-bc8c-fa7974459069\") " Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.712446 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/322bc2f6-b9c6-4769-bc8c-fa7974459069-kube-api-access-l872n" (OuterVolumeSpecName: "kube-api-access-l872n") pod "322bc2f6-b9c6-4769-bc8c-fa7974459069" (UID: "322bc2f6-b9c6-4769-bc8c-fa7974459069"). InnerVolumeSpecName "kube-api-access-l872n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.732230 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-config" (OuterVolumeSpecName: "config") pod "322bc2f6-b9c6-4769-bc8c-fa7974459069" (UID: "322bc2f6-b9c6-4769-bc8c-fa7974459069"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.738200 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "322bc2f6-b9c6-4769-bc8c-fa7974459069" (UID: "322bc2f6-b9c6-4769-bc8c-fa7974459069"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.807961 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l872n\" (UniqueName: \"kubernetes.io/projected/322bc2f6-b9c6-4769-bc8c-fa7974459069-kube-api-access-l872n\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.807990 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:08 crc kubenswrapper[4851]: I0223 13:27:08.808001 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322bc2f6-b9c6-4769-bc8c-fa7974459069-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.032227 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jdl7s"] Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.040989 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-bootstrap-jdl7s"] Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.117980 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mkz2c"] Feb 23 13:27:09 crc kubenswrapper[4851]: E0223 13:27:09.118436 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322bc2f6-b9c6-4769-bc8c-fa7974459069" containerName="neutron-db-sync" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.118459 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="322bc2f6-b9c6-4769-bc8c-fa7974459069" containerName="neutron-db-sync" Feb 23 13:27:09 crc kubenswrapper[4851]: E0223 13:27:09.118478 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1392f1-78c4-4a65-98f9-140ac98cb262" containerName="keystone-bootstrap" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.118486 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1392f1-78c4-4a65-98f9-140ac98cb262" containerName="keystone-bootstrap" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.118747 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="322bc2f6-b9c6-4769-bc8c-fa7974459069" containerName="neutron-db-sync" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.118770 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1392f1-78c4-4a65-98f9-140ac98cb262" containerName="keystone-bootstrap" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.120465 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.123206 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.123310 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4gbxh" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.123316 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.124193 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.124692 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.130970 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mkz2c"] Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.215449 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-config-data\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.215594 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-fernet-keys\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.215624 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-credential-keys\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.215664 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-scripts\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.215747 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-combined-ca-bundle\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.215777 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v6vr\" (UniqueName: \"kubernetes.io/projected/601a9699-38b0-449c-9e0a-1705b5a174a4-kube-api-access-6v6vr\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.317822 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-combined-ca-bundle\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.317877 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v6vr\" 
(UniqueName: \"kubernetes.io/projected/601a9699-38b0-449c-9e0a-1705b5a174a4-kube-api-access-6v6vr\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.317926 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-config-data\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.317999 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-fernet-keys\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.318031 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-credential-keys\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.318048 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-scripts\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.321782 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-credential-keys\") pod \"keystone-bootstrap-mkz2c\" (UID: 
\"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.321894 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-scripts\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.322450 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-fernet-keys\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.323366 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-combined-ca-bundle\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.326841 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-config-data\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.342722 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v6vr\" (UniqueName: \"kubernetes.io/projected/601a9699-38b0-449c-9e0a-1705b5a174a4-kube-api-access-6v6vr\") pod \"keystone-bootstrap-mkz2c\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.355570 
4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-24vww" event={"ID":"322bc2f6-b9c6-4769-bc8c-fa7974459069","Type":"ContainerDied","Data":"a7a6671b1d161befe2a3ecf64877c47271649a5a642a176ef47d02e200a02722"} Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.355602 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-24vww" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.355614 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7a6671b1d161befe2a3ecf64877c47271649a5a642a176ef47d02e200a02722" Feb 23 13:27:09 crc kubenswrapper[4851]: E0223 13:27:09.363418 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-466sr" podUID="23eaed53-2c8f-46ae-bc53-87ab7855282a" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.449272 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.653339 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cb5wt"] Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.654829 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: E0223 13:27:09.671498 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 23 13:27:09 crc kubenswrapper[4851]: E0223 13:27:09.671655 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPat
h:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-74xjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-n6qtq_openstack(e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:27:09 crc kubenswrapper[4851]: E0223 13:27:09.673182 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-n6qtq" podUID="e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.683073 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cb5wt"] Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.684851 4851 scope.go:117] "RemoveContainer" containerID="fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97" Feb 23 13:27:09 crc kubenswrapper[4851]: E0223 13:27:09.685346 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97\": container 
with ID starting with fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97 not found: ID does not exist" containerID="fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.685393 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97"} err="failed to get container status \"fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97\": rpc error: code = NotFound desc = could not find container \"fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97\": container with ID starting with fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97 not found: ID does not exist" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.685424 4851 scope.go:117] "RemoveContainer" containerID="575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a" Feb 23 13:27:09 crc kubenswrapper[4851]: E0223 13:27:09.689495 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a\": container with ID starting with 575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a not found: ID does not exist" containerID="575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.689541 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a"} err="failed to get container status \"575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a\": rpc error: code = NotFound desc = could not find container \"575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a\": container with ID starting with 575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a not 
found: ID does not exist" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.689567 4851 scope.go:117] "RemoveContainer" containerID="fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.691405 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97"} err="failed to get container status \"fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97\": rpc error: code = NotFound desc = could not find container \"fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97\": container with ID starting with fa999d6001743565fb0b320bfd344d5b0063b81aa46e2aec59685c3abeb38f97 not found: ID does not exist" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.691445 4851 scope.go:117] "RemoveContainer" containerID="575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.698131 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a"} err="failed to get container status \"575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a\": rpc error: code = NotFound desc = could not find container \"575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a\": container with ID starting with 575961e4a2d3dacee408845ede7795725dd3fa7a010c7344afa272b400a2d31a not found: ID does not exist" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.763683 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7f5f47d7dd-d76bg"] Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.765321 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.774799 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.775135 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.775464 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.775617 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.775760 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4r9pp" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.780780 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.783490 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.791448 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f5f47d7dd-d76bg"] Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.836397 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.836462 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-svc\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.836500 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-config\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.836711 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.836807 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.836841 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4nxs\" (UniqueName: \"kubernetes.io/projected/8b4db413-40fb-450b-8e21-445e63d1963c-kube-api-access-r4nxs\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.938860 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2b7m\" (UniqueName: \"kubernetes.io/projected/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-kube-api-access-h2b7m\") pod \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.938938 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-internal-tls-certs\") pod \"d02e395a-d7a9-4603-be40-0743b00c9cbd\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.938968 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-logs\") pod \"d02e395a-d7a9-4603-be40-0743b00c9cbd\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.939004 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-config-data\") pod 
\"d02e395a-d7a9-4603-be40-0743b00c9cbd\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.939035 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-svc\") pod \"f5fa0050-b730-4479-8add-4c4212c014d1\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.939057 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-scripts\") pod \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.939086 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-sb\") pod \"f5fa0050-b730-4479-8add-4c4212c014d1\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.939128 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-config-data\") pod \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.939176 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxwvw\" (UniqueName: \"kubernetes.io/projected/d02e395a-d7a9-4603-be40-0743b00c9cbd-kube-api-access-rxwvw\") pod \"d02e395a-d7a9-4603-be40-0743b00c9cbd\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.939208 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-horizon-secret-key\") pod \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.939242 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dx9n\" (UniqueName: \"kubernetes.io/projected/f5fa0050-b730-4479-8add-4c4212c014d1-kube-api-access-9dx9n\") pod \"f5fa0050-b730-4479-8add-4c4212c014d1\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.939293 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-logs\") pod \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\" (UID: \"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.939981 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-logs" (OuterVolumeSpecName: "logs") pod "d02e395a-d7a9-4603-be40-0743b00c9cbd" (UID: "d02e395a-d7a9-4603-be40-0743b00c9cbd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.940292 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-scripts" (OuterVolumeSpecName: "scripts") pod "990d8cf1-3b35-43b4-ba97-ed3b73f92ba3" (UID: "990d8cf1-3b35-43b4-ba97-ed3b73f92ba3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941320 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"d02e395a-d7a9-4603-be40-0743b00c9cbd\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941390 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-swift-storage-0\") pod \"f5fa0050-b730-4479-8add-4c4212c014d1\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941419 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-httpd-run\") pod \"d02e395a-d7a9-4603-be40-0743b00c9cbd\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941446 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-scripts\") pod \"d02e395a-d7a9-4603-be40-0743b00c9cbd\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941471 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-combined-ca-bundle\") pod \"d02e395a-d7a9-4603-be40-0743b00c9cbd\" (UID: \"d02e395a-d7a9-4603-be40-0743b00c9cbd\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941497 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-nb\") pod \"f5fa0050-b730-4479-8add-4c4212c014d1\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941515 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-config\") pod \"f5fa0050-b730-4479-8add-4c4212c014d1\" (UID: \"f5fa0050-b730-4479-8add-4c4212c014d1\") " Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941753 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-config\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941785 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941824 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4nxs\" (UniqueName: \"kubernetes.io/projected/8b4db413-40fb-450b-8e21-445e63d1963c-kube-api-access-r4nxs\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941893 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-httpd-config\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: 
\"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.941916 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-ovndb-tls-certs\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.942036 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.942068 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-svc\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.942095 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-config\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.942115 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " 
pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.942140 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-combined-ca-bundle\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.942166 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27wgm\" (UniqueName: \"kubernetes.io/projected/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-kube-api-access-27wgm\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.942230 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.942241 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.942594 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-logs" (OuterVolumeSpecName: "logs") pod "990d8cf1-3b35-43b4-ba97-ed3b73f92ba3" (UID: "990d8cf1-3b35-43b4-ba97-ed3b73f92ba3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.942841 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-config-data" (OuterVolumeSpecName: "config-data") pod "990d8cf1-3b35-43b4-ba97-ed3b73f92ba3" (UID: "990d8cf1-3b35-43b4-ba97-ed3b73f92ba3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.943658 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.944195 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.946097 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d02e395a-d7a9-4603-be40-0743b00c9cbd" (UID: "d02e395a-d7a9-4603-be40-0743b00c9cbd"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.950175 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-config\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.950696 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.951063 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-svc\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.958633 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "990d8cf1-3b35-43b4-ba97-ed3b73f92ba3" (UID: "990d8cf1-3b35-43b4-ba97-ed3b73f92ba3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.970569 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-scripts" (OuterVolumeSpecName: "scripts") pod "d02e395a-d7a9-4603-be40-0743b00c9cbd" (UID: "d02e395a-d7a9-4603-be40-0743b00c9cbd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.970587 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d02e395a-d7a9-4603-be40-0743b00c9cbd-kube-api-access-rxwvw" (OuterVolumeSpecName: "kube-api-access-rxwvw") pod "d02e395a-d7a9-4603-be40-0743b00c9cbd" (UID: "d02e395a-d7a9-4603-be40-0743b00c9cbd"). InnerVolumeSpecName "kube-api-access-rxwvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.970662 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-kube-api-access-h2b7m" (OuterVolumeSpecName: "kube-api-access-h2b7m") pod "990d8cf1-3b35-43b4-ba97-ed3b73f92ba3" (UID: "990d8cf1-3b35-43b4-ba97-ed3b73f92ba3"). InnerVolumeSpecName "kube-api-access-h2b7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.970578 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "d02e395a-d7a9-4603-be40-0743b00c9cbd" (UID: "d02e395a-d7a9-4603-be40-0743b00c9cbd"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 13:27:09 crc kubenswrapper[4851]: I0223 13:27:09.975262 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4nxs\" (UniqueName: \"kubernetes.io/projected/8b4db413-40fb-450b-8e21-445e63d1963c-kube-api-access-r4nxs\") pod \"dnsmasq-dns-55f844cf75-cb5wt\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") " pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.014517 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d1392f1-78c4-4a65-98f9-140ac98cb262" path="/var/lib/kubelet/pods/5d1392f1-78c4-4a65-98f9-140ac98cb262/volumes" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.054548 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5fa0050-b730-4479-8add-4c4212c014d1-kube-api-access-9dx9n" (OuterVolumeSpecName: "kube-api-access-9dx9n") pod "f5fa0050-b730-4479-8add-4c4212c014d1" (UID: "f5fa0050-b730-4479-8add-4c4212c014d1"). InnerVolumeSpecName "kube-api-access-9dx9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076010 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-combined-ca-bundle\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076239 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27wgm\" (UniqueName: \"kubernetes.io/projected/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-kube-api-access-27wgm\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076276 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-config\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076340 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-httpd-config\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076362 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-ovndb-tls-certs\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 
13:27:10.076460 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076469 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxwvw\" (UniqueName: \"kubernetes.io/projected/d02e395a-d7a9-4603-be40-0743b00c9cbd-kube-api-access-rxwvw\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076479 4851 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076489 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dx9n\" (UniqueName: \"kubernetes.io/projected/f5fa0050-b730-4479-8add-4c4212c014d1-kube-api-access-9dx9n\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076497 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076727 4851 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076745 4851 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d02e395a-d7a9-4603-be40-0743b00c9cbd-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076754 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.076763 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2b7m\" (UniqueName: \"kubernetes.io/projected/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3-kube-api-access-h2b7m\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.100229 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.177055 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-ovndb-tls-certs\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.180021 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-combined-ca-bundle\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.181552 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-httpd-config\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.184887 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27wgm\" (UniqueName: \"kubernetes.io/projected/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-kube-api-access-27wgm\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: 
\"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.203906 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-config\") pod \"neutron-7f5f47d7dd-d76bg\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.234056 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f5fa0050-b730-4479-8add-4c4212c014d1" (UID: "f5fa0050-b730-4479-8add-4c4212c014d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.238027 4851 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.256642 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64f4c4f478-f578z"] Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.281129 4851 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.281247 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: W0223 13:27:10.308981 4851 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c52d079_d9d5_469e_9319_08266bea1f82.slice/crio-02b7aeb1b1a306a1ae0e16d6ba0e72736c1e818eb9ce8052cd5dad38468d1ce5 WatchSource:0}: Error finding container 02b7aeb1b1a306a1ae0e16d6ba0e72736c1e818eb9ce8052cd5dad38468d1ce5: Status 404 returned error can't find the container with id 02b7aeb1b1a306a1ae0e16d6ba0e72736c1e818eb9ce8052cd5dad38468d1ce5 Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.361875 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d02e395a-d7a9-4603-be40-0743b00c9cbd" (UID: "d02e395a-d7a9-4603-be40-0743b00c9cbd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.384254 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.392879 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" event={"ID":"f5fa0050-b730-4479-8add-4c4212c014d1","Type":"ContainerDied","Data":"c98e768cf0a1df4a88f8e11f8f46a81f7dac8fdefc79dbf349fd2511cb86f91a"} Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.393184 4851 scope.go:117] "RemoveContainer" containerID="511c696e9e1cf6fe368eb437386798f75a8acc628866f6473d06ea297f5e5646" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.393385 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.405372 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d02e395a-d7a9-4603-be40-0743b00c9cbd","Type":"ContainerDied","Data":"a17a2ece65c532f488740aa5027376a47a099794f532b9f093edcf67371b9cf2"} Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.405683 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.422712 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64f4c4f478-f578z" event={"ID":"1c52d079-d9d5-469e-9319-08266bea1f82","Type":"ContainerStarted","Data":"02b7aeb1b1a306a1ae0e16d6ba0e72736c1e818eb9ce8052cd5dad38468d1ce5"} Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.442936 4851 scope.go:117] "RemoveContainer" containerID="144864ee25d981e0939e4e111e3d259ce448d9979d8e8071600a1273bf662d32" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.443165 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77dc8cd779-8bfdj" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.443948 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77dc8cd779-8bfdj" event={"ID":"990d8cf1-3b35-43b4-ba97-ed3b73f92ba3","Type":"ContainerDied","Data":"51a50302cc1ca89325a046f7814873339fe921efbcbaf6451180a5ae6c431b5a"} Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.443954 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.455302 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-config" (OuterVolumeSpecName: "config") pod "f5fa0050-b730-4479-8add-4c4212c014d1" (UID: "f5fa0050-b730-4479-8add-4c4212c014d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:10 crc kubenswrapper[4851]: E0223 13:27:10.473671 4851 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod990d8cf1_3b35_43b4_ba97_ed3b73f92ba3.slice\": RecentStats: unable to find data in memory cache]" Feb 23 13:27:10 crc kubenswrapper[4851]: E0223 13:27:10.481798 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-n6qtq" podUID="e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.486108 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.509636 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.533343 4851 scope.go:117] "RemoveContainer" containerID="6491fe0823a1060ae7c3eb6119b2d94e07ac56c145642b624956b8445d8df23d" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.537419 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77dc8cd779-8bfdj"] Feb 23 13:27:10 crc 
kubenswrapper[4851]: I0223 13:27:10.543385 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f5fa0050-b730-4479-8add-4c4212c014d1" (UID: "f5fa0050-b730-4479-8add-4c4212c014d1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.545198 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77dc8cd779-8bfdj"] Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.592687 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.593272 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mkz2c"] Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.635861 4851 scope.go:117] "RemoveContainer" containerID="f98fa9387243f03027cca3b74fe55b0df752356d075b3446d58cd84a11bf66b2" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.646300 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69f9fbd4d-lldd8"] Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.653533 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f5fa0050-b730-4479-8add-4c4212c014d1" (UID: "f5fa0050-b730-4479-8add-4c4212c014d1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.700955 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.719401 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-config-data" (OuterVolumeSpecName: "config-data") pod "d02e395a-d7a9-4603-be40-0743b00c9cbd" (UID: "d02e395a-d7a9-4603-be40-0743b00c9cbd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.751558 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d02e395a-d7a9-4603-be40-0743b00c9cbd" (UID: "d02e395a-d7a9-4603-be40-0743b00c9cbd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.752166 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f5fa0050-b730-4479-8add-4c4212c014d1" (UID: "f5fa0050-b730-4479-8add-4c4212c014d1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.803990 4851 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f5fa0050-b730-4479-8add-4c4212c014d1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.804017 4851 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.804026 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d02e395a-d7a9-4603-be40-0743b00c9cbd-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:10 crc kubenswrapper[4851]: I0223 13:27:10.809920 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cb5wt"] Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.159235 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.170053 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.184827 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rpncf"] Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.194245 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:27:11 crc kubenswrapper[4851]: E0223 13:27:11.194736 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fa0050-b730-4479-8add-4c4212c014d1" containerName="dnsmasq-dns" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.194758 4851 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f5fa0050-b730-4479-8add-4c4212c014d1" containerName="dnsmasq-dns" Feb 23 13:27:11 crc kubenswrapper[4851]: E0223 13:27:11.194791 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02e395a-d7a9-4603-be40-0743b00c9cbd" containerName="glance-log" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.194800 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02e395a-d7a9-4603-be40-0743b00c9cbd" containerName="glance-log" Feb 23 13:27:11 crc kubenswrapper[4851]: E0223 13:27:11.194812 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fa0050-b730-4479-8add-4c4212c014d1" containerName="init" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.194820 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fa0050-b730-4479-8add-4c4212c014d1" containerName="init" Feb 23 13:27:11 crc kubenswrapper[4851]: E0223 13:27:11.194842 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d02e395a-d7a9-4603-be40-0743b00c9cbd" containerName="glance-httpd" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.194852 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="d02e395a-d7a9-4603-be40-0743b00c9cbd" containerName="glance-httpd" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.195072 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02e395a-d7a9-4603-be40-0743b00c9cbd" containerName="glance-httpd" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.195092 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="d02e395a-d7a9-4603-be40-0743b00c9cbd" containerName="glance-log" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.195113 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fa0050-b730-4479-8add-4c4212c014d1" containerName="dnsmasq-dns" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.196774 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.201432 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.202465 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-rpncf"] Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.204305 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.210431 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.253165 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f5f47d7dd-d76bg"] Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.311775 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-config-data\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.311825 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.311857 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.311877 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.311975 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-logs\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.312006 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-scripts\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.312028 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjx8m\" (UniqueName: \"kubernetes.io/projected/154ab902-181d-479d-b449-acc94531a235-kube-api-access-vjx8m\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.312053 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.413766 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-logs\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.413812 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-scripts\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.413833 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjx8m\" (UniqueName: \"kubernetes.io/projected/154ab902-181d-479d-b449-acc94531a235-kube-api-access-vjx8m\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.413858 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.413891 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.413912 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.413934 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.413957 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.414829 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.415409 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") device mount path 
\"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.415485 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-logs\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.418589 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-config-data\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.421144 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.422023 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-scripts\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.429017 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 
13:27:11.437428 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjx8m\" (UniqueName: \"kubernetes.io/projected/154ab902-181d-479d-b449-acc94531a235-kube-api-access-vjx8m\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.447956 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f5f47d7dd-d76bg" event={"ID":"2a5eaa08-375b-4738-9b4c-0440dffbd7bf","Type":"ContainerStarted","Data":"4d4e935faaa42fb4e0c51c84dd12ab746dd3106a95aa6fb30e0cb9868f930f6c"} Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.454294 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f9fbd4d-lldd8" event={"ID":"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32","Type":"ContainerStarted","Data":"38acd9dd181607e86aa8c306f4837b75174349bff0e376b5aa0c30750acf7eb4"} Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.456787 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkz2c" event={"ID":"601a9699-38b0-449c-9e0a-1705b5a174a4","Type":"ContainerStarted","Data":"24031a8559c76406b424134aac008c815972f6e5a6292b2788cda30fb4a79386"} Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.458698 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5554597f7c-7r294" event={"ID":"81279875-8a74-47bf-900a-dcf56249c95b","Type":"ContainerStarted","Data":"e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8"} Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.463260 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"144b9733-ebbc-4841-99bc-5629575dbed3","Type":"ContainerStarted","Data":"b3a81fa660910a6c69a0c1c2a201b1578525a886be26396202f9f73e132e3792"} Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.469725 4851 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea","Type":"ContainerStarted","Data":"2b8525e5c754ff6b74baf460e2f9c2963e0849a2646e751cccef5f52c23320a0"} Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.472722 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-785dd4679c-lrw27" event={"ID":"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4","Type":"ContainerStarted","Data":"e632646867e97abfdbabbccc2035a2cbda5f741d333fae182aa2d58448ee3dd4"} Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.480453 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" event={"ID":"8b4db413-40fb-450b-8e21-445e63d1963c","Type":"ContainerStarted","Data":"50d8494090c54e623b90165bcb5b60926161eba2077bd2354e23c58b728222cf"} Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.487441 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64f4c4f478-f578z" event={"ID":"1c52d079-d9d5-469e-9319-08266bea1f82","Type":"ContainerStarted","Data":"948acf400be3c1d65ba28d3087ed101af72d3ce232fea7d2c197d69f1a0049b2"} Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.582630 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.865062 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.979946 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="990d8cf1-3b35-43b4-ba97-ed3b73f92ba3" path="/var/lib/kubelet/pods/990d8cf1-3b35-43b4-ba97-ed3b73f92ba3/volumes" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.980518 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d02e395a-d7a9-4603-be40-0743b00c9cbd" path="/var/lib/kubelet/pods/d02e395a-d7a9-4603-be40-0743b00c9cbd/volumes" Feb 23 13:27:11 crc kubenswrapper[4851]: I0223 13:27:11.981458 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5fa0050-b730-4479-8add-4c4212c014d1" path="/var/lib/kubelet/pods/f5fa0050-b730-4479-8add-4c4212c014d1/volumes" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.281799 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cf9f55d6f-4t6cw"] Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.283864 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.285885 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.287021 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.293372 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cf9f55d6f-4t6cw"] Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.331152 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-public-tls-certs\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.331258 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-config\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.331296 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-httpd-config\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.331374 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-internal-tls-certs\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.331497 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-ovndb-tls-certs\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.331538 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9v24\" (UniqueName: \"kubernetes.io/projected/275f852c-2061-4175-bc10-0b502e44e587-kube-api-access-t9v24\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.331581 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-combined-ca-bundle\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.433039 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-internal-tls-certs\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.433108 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-ovndb-tls-certs\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.433136 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9v24\" (UniqueName: \"kubernetes.io/projected/275f852c-2061-4175-bc10-0b502e44e587-kube-api-access-t9v24\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.433169 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-combined-ca-bundle\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.433204 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-public-tls-certs\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.433297 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-config\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.433365 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-httpd-config\") pod \"neutron-cf9f55d6f-4t6cw\" 
(UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.440361 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-combined-ca-bundle\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.443973 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-internal-tls-certs\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.444472 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-public-tls-certs\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.445500 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-httpd-config\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.446189 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-ovndb-tls-certs\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 
13:27:12.452794 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9v24\" (UniqueName: \"kubernetes.io/projected/275f852c-2061-4175-bc10-0b502e44e587-kube-api-access-t9v24\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.454388 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-config\") pod \"neutron-cf9f55d6f-4t6cw\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.523421 4851 generic.go:334] "Generic (PLEG): container finished" podID="8b4db413-40fb-450b-8e21-445e63d1963c" containerID="45b0b4c3952104d2e85c689068ea42f54834e086b4e01a98e5c1932a1be3cfa7" exitCode=0 Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.524455 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" event={"ID":"8b4db413-40fb-450b-8e21-445e63d1963c","Type":"ContainerDied","Data":"45b0b4c3952104d2e85c689068ea42f54834e086b4e01a98e5c1932a1be3cfa7"} Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.533696 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64f4c4f478-f578z" event={"ID":"1c52d079-d9d5-469e-9319-08266bea1f82","Type":"ContainerStarted","Data":"6bc28613253be26c7f0481335eb155b2c0a734fe218906312ef2b41429e1efe1"} Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.552315 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f5f47d7dd-d76bg" event={"ID":"2a5eaa08-375b-4738-9b4c-0440dffbd7bf","Type":"ContainerStarted","Data":"96ed0095741921188a8d41746ad066a62c194e8a164dea434629ddc034da5e22"} Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.552373 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-7f5f47d7dd-d76bg" event={"ID":"2a5eaa08-375b-4738-9b4c-0440dffbd7bf","Type":"ContainerStarted","Data":"9768cd599cd3d06879c56fe10af928af5abb04a302fa94a6f1919d0d955266fe"} Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.552991 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.571639 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f9fbd4d-lldd8" event={"ID":"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32","Type":"ContainerStarted","Data":"f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41"} Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.571685 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f9fbd4d-lldd8" event={"ID":"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32","Type":"ContainerStarted","Data":"292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4"} Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.577097 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkz2c" event={"ID":"601a9699-38b0-449c-9e0a-1705b5a174a4","Type":"ContainerStarted","Data":"05c0db6f5cb373a94b6b84602429998278895afcfbc11a4442337635af1f932e"} Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.580591 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5554597f7c-7r294" event={"ID":"81279875-8a74-47bf-900a-dcf56249c95b","Type":"ContainerStarted","Data":"3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2"} Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.580710 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5554597f7c-7r294" podUID="81279875-8a74-47bf-900a-dcf56249c95b" containerName="horizon-log" containerID="cri-o://e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8" gracePeriod=30 Feb 23 13:27:12 crc 
kubenswrapper[4851]: I0223 13:27:12.580921 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5554597f7c-7r294" podUID="81279875-8a74-47bf-900a-dcf56249c95b" containerName="horizon" containerID="cri-o://3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2" gracePeriod=30 Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.586223 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"144b9733-ebbc-4841-99bc-5629575dbed3","Type":"ContainerStarted","Data":"b6dd0f5a8c8536cb7fd7add95def9a464b9a45ef53a9835c82d8fd2c795736d2"} Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.586262 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"144b9733-ebbc-4841-99bc-5629575dbed3","Type":"ContainerStarted","Data":"0aa1687e1364e6a050cc97432d89b334b8cf605f2c08a7df3209d5c78b8e3daa"} Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.586375 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="144b9733-ebbc-4841-99bc-5629575dbed3" containerName="glance-log" containerID="cri-o://0aa1687e1364e6a050cc97432d89b334b8cf605f2c08a7df3209d5c78b8e3daa" gracePeriod=30 Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.586453 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="144b9733-ebbc-4841-99bc-5629575dbed3" containerName="glance-httpd" containerID="cri-o://b6dd0f5a8c8536cb7fd7add95def9a464b9a45ef53a9835c82d8fd2c795736d2" gracePeriod=30 Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.592908 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-64f4c4f478-f578z" podStartSLOduration=21.592877439 podStartE2EDuration="21.592877439s" podCreationTimestamp="2026-02-23 13:26:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:12.570120006 +0000 UTC m=+1187.251823694" watchObservedRunningTime="2026-02-23 13:27:12.592877439 +0000 UTC m=+1187.274581127" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.601113 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7f5f47d7dd-d76bg" podStartSLOduration=3.601098151 podStartE2EDuration="3.601098151s" podCreationTimestamp="2026-02-23 13:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:12.594141024 +0000 UTC m=+1187.275844712" watchObservedRunningTime="2026-02-23 13:27:12.601098151 +0000 UTC m=+1187.282801819" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.607889 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.611966 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-785dd4679c-lrw27" event={"ID":"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4","Type":"ContainerStarted","Data":"1785af1a62c4b7ac03828a6889361660d23027e9ddd0a9cb48f611a2396a75fa"} Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.612134 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-785dd4679c-lrw27" podUID="aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" containerName="horizon-log" containerID="cri-o://e632646867e97abfdbabbccc2035a2cbda5f741d333fae182aa2d58448ee3dd4" gracePeriod=30 Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.612433 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-785dd4679c-lrw27" podUID="aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" containerName="horizon" 
containerID="cri-o://1785af1a62c4b7ac03828a6889361660d23027e9ddd0a9cb48f611a2396a75fa" gracePeriod=30 Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.624230 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mkz2c" podStartSLOduration=3.624212174 podStartE2EDuration="3.624212174s" podCreationTimestamp="2026-02-23 13:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:12.621526618 +0000 UTC m=+1187.303230316" watchObservedRunningTime="2026-02-23 13:27:12.624212174 +0000 UTC m=+1187.305915852" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.651301 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5554597f7c-7r294" podStartSLOduration=4.56334576 podStartE2EDuration="29.651284789s" podCreationTimestamp="2026-02-23 13:26:43 +0000 UTC" firstStartedPulling="2026-02-23 13:26:44.520511869 +0000 UTC m=+1159.202215547" lastFinishedPulling="2026-02-23 13:27:09.608450898 +0000 UTC m=+1184.290154576" observedRunningTime="2026-02-23 13:27:12.642644585 +0000 UTC m=+1187.324348263" watchObservedRunningTime="2026-02-23 13:27:12.651284789 +0000 UTC m=+1187.332988467" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.676170 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69f9fbd4d-lldd8" podStartSLOduration=21.676151292 podStartE2EDuration="21.676151292s" podCreationTimestamp="2026-02-23 13:26:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:12.665877692 +0000 UTC m=+1187.347581390" watchObservedRunningTime="2026-02-23 13:27:12.676151292 +0000 UTC m=+1187.357854970" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.707439 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=23.707399395 podStartE2EDuration="23.707399395s" podCreationTimestamp="2026-02-23 13:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:12.697466444 +0000 UTC m=+1187.379170132" watchObservedRunningTime="2026-02-23 13:27:12.707399395 +0000 UTC m=+1187.389103073" Feb 23 13:27:12 crc kubenswrapper[4851]: I0223 13:27:12.733245 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-785dd4679c-lrw27" podStartSLOduration=5.482120457 podStartE2EDuration="29.733225055s" podCreationTimestamp="2026-02-23 13:26:43 +0000 UTC" firstStartedPulling="2026-02-23 13:26:45.501443242 +0000 UTC m=+1160.183146920" lastFinishedPulling="2026-02-23 13:27:09.75254784 +0000 UTC m=+1184.434251518" observedRunningTime="2026-02-23 13:27:12.727710399 +0000 UTC m=+1187.409414077" watchObservedRunningTime="2026-02-23 13:27:12.733225055 +0000 UTC m=+1187.414928723" Feb 23 13:27:13 crc kubenswrapper[4851]: I0223 13:27:13.556629 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:27:13 crc kubenswrapper[4851]: W0223 13:27:13.562468 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod154ab902_181d_479d_b449_acc94531a235.slice/crio-0ad0539ca4a93dd44bceffea8b7fe9519cba5ff9a1737ea0cd849ce056e01bcd WatchSource:0}: Error finding container 0ad0539ca4a93dd44bceffea8b7fe9519cba5ff9a1737ea0cd849ce056e01bcd: Status 404 returned error can't find the container with id 0ad0539ca4a93dd44bceffea8b7fe9519cba5ff9a1737ea0cd849ce056e01bcd Feb 23 13:27:13 crc kubenswrapper[4851]: I0223 13:27:13.622914 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" 
event={"ID":"8b4db413-40fb-450b-8e21-445e63d1963c","Type":"ContainerStarted","Data":"e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301"} Feb 23 13:27:13 crc kubenswrapper[4851]: I0223 13:27:13.624434 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"154ab902-181d-479d-b449-acc94531a235","Type":"ContainerStarted","Data":"0ad0539ca4a93dd44bceffea8b7fe9519cba5ff9a1737ea0cd849ce056e01bcd"} Feb 23 13:27:13 crc kubenswrapper[4851]: I0223 13:27:13.626078 4851 generic.go:334] "Generic (PLEG): container finished" podID="144b9733-ebbc-4841-99bc-5629575dbed3" containerID="b6dd0f5a8c8536cb7fd7add95def9a464b9a45ef53a9835c82d8fd2c795736d2" exitCode=143 Feb 23 13:27:13 crc kubenswrapper[4851]: I0223 13:27:13.626106 4851 generic.go:334] "Generic (PLEG): container finished" podID="144b9733-ebbc-4841-99bc-5629575dbed3" containerID="0aa1687e1364e6a050cc97432d89b334b8cf605f2c08a7df3209d5c78b8e3daa" exitCode=143 Feb 23 13:27:13 crc kubenswrapper[4851]: I0223 13:27:13.626144 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"144b9733-ebbc-4841-99bc-5629575dbed3","Type":"ContainerDied","Data":"b6dd0f5a8c8536cb7fd7add95def9a464b9a45ef53a9835c82d8fd2c795736d2"} Feb 23 13:27:13 crc kubenswrapper[4851]: I0223 13:27:13.626187 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"144b9733-ebbc-4841-99bc-5629575dbed3","Type":"ContainerDied","Data":"0aa1687e1364e6a050cc97432d89b334b8cf605f2c08a7df3209d5c78b8e3daa"} Feb 23 13:27:13 crc kubenswrapper[4851]: I0223 13:27:13.636560 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:27:13 crc kubenswrapper[4851]: I0223 13:27:13.705918 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cf9f55d6f-4t6cw"] Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 
13:27:14.257221 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-rpncf" podUID="f5fa0050-b730-4479-8add-4c4212c014d1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.352036 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.486478 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-logs\") pod \"144b9733-ebbc-4841-99bc-5629575dbed3\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.486593 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-public-tls-certs\") pod \"144b9733-ebbc-4841-99bc-5629575dbed3\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.486640 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptfm9\" (UniqueName: \"kubernetes.io/projected/144b9733-ebbc-4841-99bc-5629575dbed3-kube-api-access-ptfm9\") pod \"144b9733-ebbc-4841-99bc-5629575dbed3\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.486666 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-config-data\") pod \"144b9733-ebbc-4841-99bc-5629575dbed3\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.486797 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-combined-ca-bundle\") pod \"144b9733-ebbc-4841-99bc-5629575dbed3\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.486970 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-httpd-run\") pod \"144b9733-ebbc-4841-99bc-5629575dbed3\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.487137 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-scripts\") pod \"144b9733-ebbc-4841-99bc-5629575dbed3\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.487169 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"144b9733-ebbc-4841-99bc-5629575dbed3\" (UID: \"144b9733-ebbc-4841-99bc-5629575dbed3\") " Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.492502 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "144b9733-ebbc-4841-99bc-5629575dbed3" (UID: "144b9733-ebbc-4841-99bc-5629575dbed3"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.492734 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "144b9733-ebbc-4841-99bc-5629575dbed3" (UID: "144b9733-ebbc-4841-99bc-5629575dbed3"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.492967 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-logs" (OuterVolumeSpecName: "logs") pod "144b9733-ebbc-4841-99bc-5629575dbed3" (UID: "144b9733-ebbc-4841-99bc-5629575dbed3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.503042 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144b9733-ebbc-4841-99bc-5629575dbed3-kube-api-access-ptfm9" (OuterVolumeSpecName: "kube-api-access-ptfm9") pod "144b9733-ebbc-4841-99bc-5629575dbed3" (UID: "144b9733-ebbc-4841-99bc-5629575dbed3"). InnerVolumeSpecName "kube-api-access-ptfm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.508562 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-scripts" (OuterVolumeSpecName: "scripts") pod "144b9733-ebbc-4841-99bc-5629575dbed3" (UID: "144b9733-ebbc-4841-99bc-5629575dbed3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.544133 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "144b9733-ebbc-4841-99bc-5629575dbed3" (UID: "144b9733-ebbc-4841-99bc-5629575dbed3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.565944 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-config-data" (OuterVolumeSpecName: "config-data") pod "144b9733-ebbc-4841-99bc-5629575dbed3" (UID: "144b9733-ebbc-4841-99bc-5629575dbed3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.583245 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.591943 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.592000 4851 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.592013 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.592022 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptfm9\" (UniqueName: \"kubernetes.io/projected/144b9733-ebbc-4841-99bc-5629575dbed3-kube-api-access-ptfm9\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.592033 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 
13:27:14.592045 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.592055 4851 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/144b9733-ebbc-4841-99bc-5629575dbed3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.605998 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "144b9733-ebbc-4841-99bc-5629575dbed3" (UID: "144b9733-ebbc-4841-99bc-5629575dbed3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.635743 4851 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.656670 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea","Type":"ContainerStarted","Data":"5bfe9161b6bb9d78efc19dae4b67f7da28e00b68c91c562a44e273b9ce05685f"} Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.669984 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf9f55d6f-4t6cw" event={"ID":"275f852c-2061-4175-bc10-0b502e44e587","Type":"ContainerStarted","Data":"2bdbd7caecfb0a5f62b765d64a6469ae3eb85a4655fc378595c991b8e9f0dfa6"} Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.679551 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.681905 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"144b9733-ebbc-4841-99bc-5629575dbed3","Type":"ContainerDied","Data":"b3a81fa660910a6c69a0c1c2a201b1578525a886be26396202f9f73e132e3792"} Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.681974 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.682003 4851 scope.go:117] "RemoveContainer" containerID="b6dd0f5a8c8536cb7fd7add95def9a464b9a45ef53a9835c82d8fd2c795736d2" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.693739 4851 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/144b9733-ebbc-4841-99bc-5629575dbed3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.693768 4851 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.708573 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" podStartSLOduration=5.708553158 podStartE2EDuration="5.708553158s" podCreationTimestamp="2026-02-23 13:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:14.698561906 +0000 UTC m=+1189.380265604" watchObservedRunningTime="2026-02-23 13:27:14.708553158 +0000 UTC m=+1189.390256836" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.716435 4851 scope.go:117] "RemoveContainer" 
containerID="0aa1687e1364e6a050cc97432d89b334b8cf605f2c08a7df3209d5c78b8e3daa" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.740625 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.754120 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.761316 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:27:14 crc kubenswrapper[4851]: E0223 13:27:14.762391 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144b9733-ebbc-4841-99bc-5629575dbed3" containerName="glance-httpd" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.762476 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="144b9733-ebbc-4841-99bc-5629575dbed3" containerName="glance-httpd" Feb 23 13:27:14 crc kubenswrapper[4851]: E0223 13:27:14.762554 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144b9733-ebbc-4841-99bc-5629575dbed3" containerName="glance-log" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.762640 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="144b9733-ebbc-4841-99bc-5629575dbed3" containerName="glance-log" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.763088 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="144b9733-ebbc-4841-99bc-5629575dbed3" containerName="glance-log" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.763212 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="144b9733-ebbc-4841-99bc-5629575dbed3" containerName="glance-httpd" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.764723 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.768800 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.769041 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.788486 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.897636 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.897776 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.897800 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9b2\" (UniqueName: \"kubernetes.io/projected/55c5d815-2740-4a04-aba6-b030687b69bb-kube-api-access-bj9b2\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.897823 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.897848 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.897915 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.897936 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.897952 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-logs\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.999431 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:14 crc kubenswrapper[4851]: I0223 13:27:14.999469 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9b2\" (UniqueName: \"kubernetes.io/projected/55c5d815-2740-4a04-aba6-b030687b69bb-kube-api-access-bj9b2\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.000369 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.000412 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.000570 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.000628 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.000651 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-logs\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.000782 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.001171 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.002514 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.003749 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-logs\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " 
pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.005417 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.009145 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-config-data\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.009910 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.023998 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9b2\" (UniqueName: \"kubernetes.io/projected/55c5d815-2740-4a04-aba6-b030687b69bb-kube-api-access-bj9b2\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.031857 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-scripts\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: 
I0223 13:27:15.124716 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.396531 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.693162 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf9f55d6f-4t6cw" event={"ID":"275f852c-2061-4175-bc10-0b502e44e587","Type":"ContainerStarted","Data":"dc004bf07f04d3099dac40e6deb9ba3607ee5f3d136950a7dcb698f80b934aac"} Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.693203 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf9f55d6f-4t6cw" event={"ID":"275f852c-2061-4175-bc10-0b502e44e587","Type":"ContainerStarted","Data":"d9c34003fc467639e156191b838c11e289a33a1f0ad1869c9a50667c117a2240"} Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.694205 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.702793 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8l7sd" event={"ID":"20225786-c4f3-48e3-8719-d0710aeb3655","Type":"ContainerStarted","Data":"257cd63d2a8ced9088d12aef115576c1ec53c106e7724429fb32090800b6b4f1"} Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.706432 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"154ab902-181d-479d-b449-acc94531a235","Type":"ContainerStarted","Data":"bcac84db92448afeab719f76638f5c39c4fab233d29f104717d59e00f4f4d196"} Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.728734 4851 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cf9f55d6f-4t6cw" podStartSLOduration=3.728718928 podStartE2EDuration="3.728718928s" podCreationTimestamp="2026-02-23 13:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:15.719434226 +0000 UTC m=+1190.401137904" watchObservedRunningTime="2026-02-23 13:27:15.728718928 +0000 UTC m=+1190.410422606" Feb 23 13:27:15 crc kubenswrapper[4851]: I0223 13:27:15.983692 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="144b9733-ebbc-4841-99bc-5629575dbed3" path="/var/lib/kubelet/pods/144b9733-ebbc-4841-99bc-5629575dbed3/volumes" Feb 23 13:27:16 crc kubenswrapper[4851]: I0223 13:27:16.008679 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8l7sd" podStartSLOduration=2.792121931 podStartE2EDuration="33.0086608s" podCreationTimestamp="2026-02-23 13:26:43 +0000 UTC" firstStartedPulling="2026-02-23 13:26:45.041107657 +0000 UTC m=+1159.722811335" lastFinishedPulling="2026-02-23 13:27:15.257646526 +0000 UTC m=+1189.939350204" observedRunningTime="2026-02-23 13:27:15.743838676 +0000 UTC m=+1190.425542344" watchObservedRunningTime="2026-02-23 13:27:16.0086608 +0000 UTC m=+1190.690364498" Feb 23 13:27:16 crc kubenswrapper[4851]: I0223 13:27:16.046376 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:27:16 crc kubenswrapper[4851]: I0223 13:27:16.737363 4851 generic.go:334] "Generic (PLEG): container finished" podID="601a9699-38b0-449c-9e0a-1705b5a174a4" containerID="05c0db6f5cb373a94b6b84602429998278895afcfbc11a4442337635af1f932e" exitCode=0 Feb 23 13:27:16 crc kubenswrapper[4851]: I0223 13:27:16.737553 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkz2c" 
event={"ID":"601a9699-38b0-449c-9e0a-1705b5a174a4","Type":"ContainerDied","Data":"05c0db6f5cb373a94b6b84602429998278895afcfbc11a4442337635af1f932e"} Feb 23 13:27:16 crc kubenswrapper[4851]: I0223 13:27:16.746501 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55c5d815-2740-4a04-aba6-b030687b69bb","Type":"ContainerStarted","Data":"da7c189540def8c362aff026d32261da7d2b3ac45d6d4a284ef7a0073250b678"} Feb 23 13:27:16 crc kubenswrapper[4851]: I0223 13:27:16.749354 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"154ab902-181d-479d-b449-acc94531a235","Type":"ContainerStarted","Data":"021f6b22de043f9421aa3387e408457b15cb1a439560806440749678a33ad98a"} Feb 23 13:27:16 crc kubenswrapper[4851]: I0223 13:27:16.821553 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.821526402 podStartE2EDuration="5.821526402s" podCreationTimestamp="2026-02-23 13:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:16.778920498 +0000 UTC m=+1191.460624186" watchObservedRunningTime="2026-02-23 13:27:16.821526402 +0000 UTC m=+1191.503230100" Feb 23 13:27:17 crc kubenswrapper[4851]: I0223 13:27:17.772057 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55c5d815-2740-4a04-aba6-b030687b69bb","Type":"ContainerStarted","Data":"71705e6de5619295269d61be1381c7af0f139a8bd398b834405b849c007f3961"} Feb 23 13:27:17 crc kubenswrapper[4851]: I0223 13:27:17.772386 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55c5d815-2740-4a04-aba6-b030687b69bb","Type":"ContainerStarted","Data":"2dd1343c1ba57065aa742f3d9e8b7ef9d19ed64ecbcf7380aec50b0007b0ec73"} Feb 23 13:27:17 
crc kubenswrapper[4851]: I0223 13:27:17.811853 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.811831917 podStartE2EDuration="3.811831917s" podCreationTimestamp="2026-02-23 13:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:17.806534417 +0000 UTC m=+1192.488238095" watchObservedRunningTime="2026-02-23 13:27:17.811831917 +0000 UTC m=+1192.493535595" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.308314 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.476837 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v6vr\" (UniqueName: \"kubernetes.io/projected/601a9699-38b0-449c-9e0a-1705b5a174a4-kube-api-access-6v6vr\") pod \"601a9699-38b0-449c-9e0a-1705b5a174a4\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.477275 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-fernet-keys\") pod \"601a9699-38b0-449c-9e0a-1705b5a174a4\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.477301 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-scripts\") pod \"601a9699-38b0-449c-9e0a-1705b5a174a4\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.477362 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-credential-keys\") pod \"601a9699-38b0-449c-9e0a-1705b5a174a4\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.477387 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-combined-ca-bundle\") pod \"601a9699-38b0-449c-9e0a-1705b5a174a4\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.477446 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-config-data\") pod \"601a9699-38b0-449c-9e0a-1705b5a174a4\" (UID: \"601a9699-38b0-449c-9e0a-1705b5a174a4\") " Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.482936 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "601a9699-38b0-449c-9e0a-1705b5a174a4" (UID: "601a9699-38b0-449c-9e0a-1705b5a174a4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.486558 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-scripts" (OuterVolumeSpecName: "scripts") pod "601a9699-38b0-449c-9e0a-1705b5a174a4" (UID: "601a9699-38b0-449c-9e0a-1705b5a174a4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.493733 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601a9699-38b0-449c-9e0a-1705b5a174a4-kube-api-access-6v6vr" (OuterVolumeSpecName: "kube-api-access-6v6vr") pod "601a9699-38b0-449c-9e0a-1705b5a174a4" (UID: "601a9699-38b0-449c-9e0a-1705b5a174a4"). InnerVolumeSpecName "kube-api-access-6v6vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.497494 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "601a9699-38b0-449c-9e0a-1705b5a174a4" (UID: "601a9699-38b0-449c-9e0a-1705b5a174a4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.515485 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-config-data" (OuterVolumeSpecName: "config-data") pod "601a9699-38b0-449c-9e0a-1705b5a174a4" (UID: "601a9699-38b0-449c-9e0a-1705b5a174a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.524121 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "601a9699-38b0-449c-9e0a-1705b5a174a4" (UID: "601a9699-38b0-449c-9e0a-1705b5a174a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.584910 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.584942 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v6vr\" (UniqueName: \"kubernetes.io/projected/601a9699-38b0-449c-9e0a-1705b5a174a4-kube-api-access-6v6vr\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.584956 4851 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.584965 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.584973 4851 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.584983 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601a9699-38b0-449c-9e0a-1705b5a174a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.794719 4851 generic.go:334] "Generic (PLEG): container finished" podID="20225786-c4f3-48e3-8719-d0710aeb3655" containerID="257cd63d2a8ced9088d12aef115576c1ec53c106e7724429fb32090800b6b4f1" exitCode=0 Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.794799 4851 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/placement-db-sync-8l7sd" event={"ID":"20225786-c4f3-48e3-8719-d0710aeb3655","Type":"ContainerDied","Data":"257cd63d2a8ced9088d12aef115576c1ec53c106e7724429fb32090800b6b4f1"} Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.799285 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mkz2c" event={"ID":"601a9699-38b0-449c-9e0a-1705b5a174a4","Type":"ContainerDied","Data":"24031a8559c76406b424134aac008c815972f6e5a6292b2788cda30fb4a79386"} Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.799324 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24031a8559c76406b424134aac008c815972f6e5a6292b2788cda30fb4a79386" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.799355 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mkz2c" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.836753 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b7f866994-tdwdz"] Feb 23 13:27:18 crc kubenswrapper[4851]: E0223 13:27:18.837117 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601a9699-38b0-449c-9e0a-1705b5a174a4" containerName="keystone-bootstrap" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.837134 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="601a9699-38b0-449c-9e0a-1705b5a174a4" containerName="keystone-bootstrap" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.837315 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="601a9699-38b0-449c-9e0a-1705b5a174a4" containerName="keystone-bootstrap" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.837873 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.843143 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.843494 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.843701 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.844713 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.844752 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.844889 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4gbxh" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.862594 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b7f866994-tdwdz"] Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.991039 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-combined-ca-bundle\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.991095 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtvwx\" (UniqueName: \"kubernetes.io/projected/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-kube-api-access-qtvwx\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") 
" pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.991220 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-config-data\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.991278 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-internal-tls-certs\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.991298 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-scripts\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.991338 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-public-tls-certs\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.991380 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-credential-keys\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " 
pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:18 crc kubenswrapper[4851]: I0223 13:27:18.991400 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-fernet-keys\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.093316 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-config-data\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.093405 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-scripts\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.093427 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-internal-tls-certs\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.093475 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-public-tls-certs\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.093514 
4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-credential-keys\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.093544 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-fernet-keys\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.093575 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-combined-ca-bundle\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.093601 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtvwx\" (UniqueName: \"kubernetes.io/projected/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-kube-api-access-qtvwx\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.099962 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-fernet-keys\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.102182 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-combined-ca-bundle\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.105290 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-credential-keys\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.107473 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-internal-tls-certs\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.109534 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-config-data\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.114630 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-scripts\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.115127 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtvwx\" (UniqueName: \"kubernetes.io/projected/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-kube-api-access-qtvwx\") pod \"keystone-b7f866994-tdwdz\" (UID: 
\"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.115157 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20cfa6bd-a3d2-4e2c-9655-6b4db78b1771-public-tls-certs\") pod \"keystone-b7f866994-tdwdz\" (UID: \"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771\") " pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:19 crc kubenswrapper[4851]: I0223 13:27:19.159394 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:20 crc kubenswrapper[4851]: I0223 13:27:20.103642 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" Feb 23 13:27:20 crc kubenswrapper[4851]: I0223 13:27:20.183587 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z2cqf"] Feb 23 13:27:20 crc kubenswrapper[4851]: I0223 13:27:20.183833 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" podUID="f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" containerName="dnsmasq-dns" containerID="cri-o://84769ef9511ed5a14d6f39adaace7b8cc5413fd3576a0e21e216d71f3cfefd1f" gracePeriod=10 Feb 23 13:27:20 crc kubenswrapper[4851]: I0223 13:27:20.821859 4851 generic.go:334] "Generic (PLEG): container finished" podID="f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" containerID="84769ef9511ed5a14d6f39adaace7b8cc5413fd3576a0e21e216d71f3cfefd1f" exitCode=0 Feb 23 13:27:20 crc kubenswrapper[4851]: I0223 13:27:20.822222 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" event={"ID":"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed","Type":"ContainerDied","Data":"84769ef9511ed5a14d6f39adaace7b8cc5413fd3576a0e21e216d71f3cfefd1f"} Feb 23 13:27:21 crc kubenswrapper[4851]: I0223 13:27:21.866239 4851 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:21 crc kubenswrapper[4851]: I0223 13:27:21.866872 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:21 crc kubenswrapper[4851]: I0223 13:27:21.908025 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:21 crc kubenswrapper[4851]: I0223 13:27:21.955635 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:22 crc kubenswrapper[4851]: I0223 13:27:22.142158 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:27:22 crc kubenswrapper[4851]: I0223 13:27:22.142238 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:27:22 crc kubenswrapper[4851]: I0223 13:27:22.146047 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69f9fbd4d-lldd8" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 23 13:27:22 crc kubenswrapper[4851]: I0223 13:27:22.235435 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:27:22 crc kubenswrapper[4851]: I0223 13:27:22.235487 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:27:22 crc kubenswrapper[4851]: I0223 13:27:22.237064 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64f4c4f478-f578z" podUID="1c52d079-d9d5-469e-9319-08266bea1f82" containerName="horizon" probeResult="failure" 
output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 23 13:27:22 crc kubenswrapper[4851]: I0223 13:27:22.838457 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:22 crc kubenswrapper[4851]: I0223 13:27:22.838708 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.494784 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8l7sd" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.497732 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.626455 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-config\") pod \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.626510 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20225786-c4f3-48e3-8719-d0710aeb3655-logs\") pod \"20225786-c4f3-48e3-8719-d0710aeb3655\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.626534 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-svc\") pod \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.626649 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-sb\") pod \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.626678 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-scripts\") pod \"20225786-c4f3-48e3-8719-d0710aeb3655\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.626715 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-combined-ca-bundle\") pod \"20225786-c4f3-48e3-8719-d0710aeb3655\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.626742 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-config-data\") pod \"20225786-c4f3-48e3-8719-d0710aeb3655\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.626818 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm6q4\" (UniqueName: \"kubernetes.io/projected/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-kube-api-access-cm6q4\") pod \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.626877 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-swift-storage-0\") pod \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\" (UID: 
\"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.627036 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-nb\") pod \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\" (UID: \"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed\") " Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.627082 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hftbc\" (UniqueName: \"kubernetes.io/projected/20225786-c4f3-48e3-8719-d0710aeb3655-kube-api-access-hftbc\") pod \"20225786-c4f3-48e3-8719-d0710aeb3655\" (UID: \"20225786-c4f3-48e3-8719-d0710aeb3655\") " Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.631105 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20225786-c4f3-48e3-8719-d0710aeb3655-logs" (OuterVolumeSpecName: "logs") pod "20225786-c4f3-48e3-8719-d0710aeb3655" (UID: "20225786-c4f3-48e3-8719-d0710aeb3655"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.635673 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-scripts" (OuterVolumeSpecName: "scripts") pod "20225786-c4f3-48e3-8719-d0710aeb3655" (UID: "20225786-c4f3-48e3-8719-d0710aeb3655"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.635744 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20225786-c4f3-48e3-8719-d0710aeb3655-kube-api-access-hftbc" (OuterVolumeSpecName: "kube-api-access-hftbc") pod "20225786-c4f3-48e3-8719-d0710aeb3655" (UID: "20225786-c4f3-48e3-8719-d0710aeb3655"). InnerVolumeSpecName "kube-api-access-hftbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.637237 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-kube-api-access-cm6q4" (OuterVolumeSpecName: "kube-api-access-cm6q4") pod "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" (UID: "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed"). InnerVolumeSpecName "kube-api-access-cm6q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.663421 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-config-data" (OuterVolumeSpecName: "config-data") pod "20225786-c4f3-48e3-8719-d0710aeb3655" (UID: "20225786-c4f3-48e3-8719-d0710aeb3655"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.689799 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20225786-c4f3-48e3-8719-d0710aeb3655" (UID: "20225786-c4f3-48e3-8719-d0710aeb3655"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.697041 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-config" (OuterVolumeSpecName: "config") pod "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" (UID: "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.699744 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" (UID: "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.704193 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" (UID: "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.707204 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" (UID: "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.730980 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm6q4\" (UniqueName: \"kubernetes.io/projected/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-kube-api-access-cm6q4\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.731016 4851 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.731025 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.731033 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hftbc\" (UniqueName: \"kubernetes.io/projected/20225786-c4f3-48e3-8719-d0710aeb3655-kube-api-access-hftbc\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.731042 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.731052 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20225786-c4f3-48e3-8719-d0710aeb3655-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.731060 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.731068 4851 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.731075 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.731086 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20225786-c4f3-48e3-8719-d0710aeb3655-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.752882 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" (UID: "f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.770251 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b7f866994-tdwdz"] Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.832246 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.856078 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea","Type":"ContainerStarted","Data":"3c000c0abd243391ae984b2aa4be6488d9123e4845e04e9101947b96441d1e6e"} Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.857097 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-466sr" event={"ID":"23eaed53-2c8f-46ae-bc53-87ab7855282a","Type":"ContainerStarted","Data":"76fb212e0c75524cd48fa4aea5395cfe84d36839a062cc7617f48b797b3e9289"} Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.860323 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8l7sd" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.860399 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8l7sd" event={"ID":"20225786-c4f3-48e3-8719-d0710aeb3655","Type":"ContainerDied","Data":"f4ff11f3921b6e8edcae1cdb082cae575047368e7b3798f81fd9ddea27ffe381"} Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.860425 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4ff11f3921b6e8edcae1cdb082cae575047368e7b3798f81fd9ddea27ffe381" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.862721 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" event={"ID":"f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed","Type":"ContainerDied","Data":"4d6707050a1df4aa024d30db1e9cad10c0aa304905ae3106e5d8cb22233fa30c"} Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.862758 4851 scope.go:117] "RemoveContainer" containerID="84769ef9511ed5a14d6f39adaace7b8cc5413fd3576a0e21e216d71f3cfefd1f" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.862768 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.869535 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b7f866994-tdwdz" event={"ID":"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771","Type":"ContainerStarted","Data":"468e8cf7cef663f95f6d1b9f46e557249db04de9b619793531d12f022f16bb5f"} Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.881456 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-466sr" podStartSLOduration=2.774841119 podStartE2EDuration="41.881436216s" podCreationTimestamp="2026-02-23 13:26:43 +0000 UTC" firstStartedPulling="2026-02-23 13:26:45.222690275 +0000 UTC m=+1159.904393953" lastFinishedPulling="2026-02-23 13:27:24.329285372 +0000 UTC m=+1199.010989050" observedRunningTime="2026-02-23 13:27:24.874815629 +0000 UTC m=+1199.556519327" watchObservedRunningTime="2026-02-23 13:27:24.881436216 +0000 UTC m=+1199.563139884" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.897004 4851 scope.go:117] "RemoveContainer" containerID="f57844956c08dcd3e22c2fe98af08947b70d1ec43e0a8a04b8bb21f1f4cc6d5f" Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.939307 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z2cqf"] Feb 23 13:27:24 crc kubenswrapper[4851]: I0223 13:27:24.946303 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-z2cqf"] Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.015216 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.015297 4851 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.016807 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.396977 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.397306 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.447999 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.458940 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.574131 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d6d46b468-7drjb"] Feb 23 13:27:25 crc kubenswrapper[4851]: E0223 13:27:25.574466 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" containerName="dnsmasq-dns" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.574478 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" containerName="dnsmasq-dns" Feb 23 13:27:25 crc kubenswrapper[4851]: E0223 13:27:25.574507 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20225786-c4f3-48e3-8719-d0710aeb3655" containerName="placement-db-sync" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.574513 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="20225786-c4f3-48e3-8719-d0710aeb3655" containerName="placement-db-sync" Feb 23 13:27:25 crc kubenswrapper[4851]: E0223 13:27:25.574532 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" containerName="init" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.574538 4851 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" containerName="init" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.574705 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="20225786-c4f3-48e3-8719-d0710aeb3655" containerName="placement-db-sync" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.574723 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" containerName="dnsmasq-dns" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.575689 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.578036 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rw7s7" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.578257 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.588123 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.588135 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.589208 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.628173 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d6d46b468-7drjb"] Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.652214 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62c9v\" (UniqueName: 
\"kubernetes.io/projected/4c14e85a-4380-49f8-8311-abcaa3587c47-kube-api-access-62c9v\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.652275 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-combined-ca-bundle\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.652317 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-scripts\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.652404 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-public-tls-certs\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.652435 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-config-data\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.652451 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4c14e85a-4380-49f8-8311-abcaa3587c47-logs\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.652492 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-internal-tls-certs\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.754439 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62c9v\" (UniqueName: \"kubernetes.io/projected/4c14e85a-4380-49f8-8311-abcaa3587c47-kube-api-access-62c9v\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.754502 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-combined-ca-bundle\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.754542 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-scripts\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.754562 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-public-tls-certs\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.754604 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-config-data\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.754621 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c14e85a-4380-49f8-8311-abcaa3587c47-logs\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.754652 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-internal-tls-certs\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.756723 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c14e85a-4380-49f8-8311-abcaa3587c47-logs\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.760371 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-scripts\") pod \"placement-7d6d46b468-7drjb\" (UID: 
\"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.760627 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-public-tls-certs\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.762488 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-combined-ca-bundle\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.768582 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-internal-tls-certs\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.778927 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62c9v\" (UniqueName: \"kubernetes.io/projected/4c14e85a-4380-49f8-8311-abcaa3587c47-kube-api-access-62c9v\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.778989 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c14e85a-4380-49f8-8311-abcaa3587c47-config-data\") pod \"placement-7d6d46b468-7drjb\" (UID: \"4c14e85a-4380-49f8-8311-abcaa3587c47\") " pod="openstack/placement-7d6d46b468-7drjb" Feb 23 
13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.896735 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.907074 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n6qtq" event={"ID":"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c","Type":"ContainerStarted","Data":"44f2ac5fe295c1db8ff532f5586cc517bee4de8ad62e64660e2e5d5d08fd84aa"} Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.933505 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b7f866994-tdwdz" event={"ID":"20cfa6bd-a3d2-4e2c-9655-6b4db78b1771","Type":"ContainerStarted","Data":"e65f0823ea4e329bd63f216f7e291253f7cae9ba53b32937b5cb740e5c49a204"} Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.933552 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.933565 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.933575 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-b7f866994-tdwdz" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.941457 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-n6qtq" podStartSLOduration=4.15733168 podStartE2EDuration="42.941435972s" podCreationTimestamp="2026-02-23 13:26:43 +0000 UTC" firstStartedPulling="2026-02-23 13:26:45.546498097 +0000 UTC m=+1160.228201775" lastFinishedPulling="2026-02-23 13:27:24.330602399 +0000 UTC m=+1199.012306067" observedRunningTime="2026-02-23 13:27:25.924866534 +0000 UTC m=+1200.606570212" watchObservedRunningTime="2026-02-23 13:27:25.941435972 +0000 UTC m=+1200.623139650" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 
13:27:25.965718 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b7f866994-tdwdz" podStartSLOduration=7.965680217 podStartE2EDuration="7.965680217s" podCreationTimestamp="2026-02-23 13:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:25.962707033 +0000 UTC m=+1200.644410711" watchObservedRunningTime="2026-02-23 13:27:25.965680217 +0000 UTC m=+1200.647383905" Feb 23 13:27:25 crc kubenswrapper[4851]: I0223 13:27:25.991050 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" path="/var/lib/kubelet/pods/f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed/volumes" Feb 23 13:27:26 crc kubenswrapper[4851]: I0223 13:27:26.457759 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d6d46b468-7drjb"] Feb 23 13:27:26 crc kubenswrapper[4851]: I0223 13:27:26.950457 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d6d46b468-7drjb" event={"ID":"4c14e85a-4380-49f8-8311-abcaa3587c47","Type":"ContainerStarted","Data":"26d2fb703214339af0d8ad8feb58c012a0878d8251ccaa804641b0ea18c15fc2"} Feb 23 13:27:26 crc kubenswrapper[4851]: I0223 13:27:26.950993 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d6d46b468-7drjb" event={"ID":"4c14e85a-4380-49f8-8311-abcaa3587c47","Type":"ContainerStarted","Data":"781cb7e0882abcf88978fde64d79ee7be91fb8f2edcd1ffe99bf37b0f55c1baf"} Feb 23 13:27:27 crc kubenswrapper[4851]: I0223 13:27:27.962673 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d6d46b468-7drjb" event={"ID":"4c14e85a-4380-49f8-8311-abcaa3587c47","Type":"ContainerStarted","Data":"1e4ce700778a43664102c71d1d382cee571398df5056667ea009138789114c39"} Feb 23 13:27:27 crc kubenswrapper[4851]: I0223 13:27:27.962989 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:27 crc kubenswrapper[4851]: I0223 13:27:27.965734 4851 generic.go:334] "Generic (PLEG): container finished" podID="23eaed53-2c8f-46ae-bc53-87ab7855282a" containerID="76fb212e0c75524cd48fa4aea5395cfe84d36839a062cc7617f48b797b3e9289" exitCode=0 Feb 23 13:27:27 crc kubenswrapper[4851]: I0223 13:27:27.965802 4851 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:27:27 crc kubenswrapper[4851]: I0223 13:27:27.965810 4851 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:27:27 crc kubenswrapper[4851]: I0223 13:27:27.965837 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-466sr" event={"ID":"23eaed53-2c8f-46ae-bc53-87ab7855282a","Type":"ContainerDied","Data":"76fb212e0c75524cd48fa4aea5395cfe84d36839a062cc7617f48b797b3e9289"} Feb 23 13:27:27 crc kubenswrapper[4851]: I0223 13:27:27.993934 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7d6d46b468-7drjb" podStartSLOduration=2.993914345 podStartE2EDuration="2.993914345s" podCreationTimestamp="2026-02-23 13:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:27.98699202 +0000 UTC m=+1202.668695718" watchObservedRunningTime="2026-02-23 13:27:27.993914345 +0000 UTC m=+1202.675618023" Feb 23 13:27:28 crc kubenswrapper[4851]: I0223 13:27:28.066255 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 13:27:28 crc kubenswrapper[4851]: I0223 13:27:28.328717 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 13:27:28 crc kubenswrapper[4851]: I0223 13:27:28.972419 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.140092 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-z2cqf" podUID="f2a9e3cd-a6d6-4834-91ec-a95b3fa190ed" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.369950 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-466sr" Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.442173 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-db-sync-config-data\") pod \"23eaed53-2c8f-46ae-bc53-87ab7855282a\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.442315 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-combined-ca-bundle\") pod \"23eaed53-2c8f-46ae-bc53-87ab7855282a\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.442366 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htgg8\" (UniqueName: \"kubernetes.io/projected/23eaed53-2c8f-46ae-bc53-87ab7855282a-kube-api-access-htgg8\") pod \"23eaed53-2c8f-46ae-bc53-87ab7855282a\" (UID: \"23eaed53-2c8f-46ae-bc53-87ab7855282a\") " Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.447764 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "23eaed53-2c8f-46ae-bc53-87ab7855282a" (UID: "23eaed53-2c8f-46ae-bc53-87ab7855282a"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.455541 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23eaed53-2c8f-46ae-bc53-87ab7855282a-kube-api-access-htgg8" (OuterVolumeSpecName: "kube-api-access-htgg8") pod "23eaed53-2c8f-46ae-bc53-87ab7855282a" (UID: "23eaed53-2c8f-46ae-bc53-87ab7855282a"). InnerVolumeSpecName "kube-api-access-htgg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.474538 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23eaed53-2c8f-46ae-bc53-87ab7855282a" (UID: "23eaed53-2c8f-46ae-bc53-87ab7855282a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.544897 4851 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.544943 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23eaed53-2c8f-46ae-bc53-87ab7855282a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.544957 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htgg8\" (UniqueName: \"kubernetes.io/projected/23eaed53-2c8f-46ae-bc53-87ab7855282a-kube-api-access-htgg8\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.985761 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-466sr" 
event={"ID":"23eaed53-2c8f-46ae-bc53-87ab7855282a","Type":"ContainerDied","Data":"6c026c4a8c7a4c6b77862b621bd5c9d4e6c9a32ec933e328c616284f2ef3fc88"} Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.985796 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c026c4a8c7a4c6b77862b621bd5c9d4e6c9a32ec933e328c616284f2ef3fc88" Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.985799 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-466sr" Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.988204 4851 generic.go:334] "Generic (PLEG): container finished" podID="e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" containerID="44f2ac5fe295c1db8ff532f5586cc517bee4de8ad62e64660e2e5d5d08fd84aa" exitCode=0 Feb 23 13:27:29 crc kubenswrapper[4851]: I0223 13:27:29.989012 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n6qtq" event={"ID":"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c","Type":"ContainerDied","Data":"44f2ac5fe295c1db8ff532f5586cc517bee4de8ad62e64660e2e5d5d08fd84aa"} Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.242393 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-66757bd65d-pn2zh"] Feb 23 13:27:30 crc kubenswrapper[4851]: E0223 13:27:30.242856 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23eaed53-2c8f-46ae-bc53-87ab7855282a" containerName="barbican-db-sync" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.242878 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="23eaed53-2c8f-46ae-bc53-87ab7855282a" containerName="barbican-db-sync" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.243105 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="23eaed53-2c8f-46ae-bc53-87ab7855282a" containerName="barbican-db-sync" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.244264 4851 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.247168 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.247997 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-pr2tq" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.248105 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.248198 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7546f4466c-vlsxg"] Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.249531 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.251508 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.269986 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-66757bd65d-pn2zh"] Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.282388 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7546f4466c-vlsxg"] Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.361860 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42f43676-ccd3-45e3-b729-ab33430aca9a-logs\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.361915 4851 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42f43676-ccd3-45e3-b729-ab33430aca9a-config-data-custom\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.361936 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-logs\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.361953 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ghjp\" (UniqueName: \"kubernetes.io/projected/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-kube-api-access-7ghjp\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.361967 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-config-data\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.362002 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-combined-ca-bundle\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " 
pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.362031 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f43676-ccd3-45e3-b729-ab33430aca9a-config-data\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.362067 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfv4f\" (UniqueName: \"kubernetes.io/projected/42f43676-ccd3-45e3-b729-ab33430aca9a-kube-api-access-pfv4f\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.362095 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-config-data-custom\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.362122 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f43676-ccd3-45e3-b729-ab33430aca9a-combined-ca-bundle\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.399908 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s6f92"] Feb 23 13:27:30 crc kubenswrapper[4851]: 
I0223 13:27:30.401413 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463286 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42f43676-ccd3-45e3-b729-ab33430aca9a-config-data-custom\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463363 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-logs\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463395 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ghjp\" (UniqueName: \"kubernetes.io/projected/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-kube-api-access-7ghjp\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463420 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-config-data\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463492 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463520 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-combined-ca-bundle\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463556 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-config\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463583 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f43676-ccd3-45e3-b729-ab33430aca9a-config-data\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463605 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6t9c\" (UniqueName: \"kubernetes.io/projected/85d9ee6a-3545-4095-8905-ca17f8690bca-kube-api-access-r6t9c\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463670 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pfv4f\" (UniqueName: \"kubernetes.io/projected/42f43676-ccd3-45e3-b729-ab33430aca9a-kube-api-access-pfv4f\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463703 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-svc\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463738 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-config-data-custom\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463762 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463789 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463822 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f43676-ccd3-45e3-b729-ab33430aca9a-combined-ca-bundle\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.463882 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42f43676-ccd3-45e3-b729-ab33430aca9a-logs\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.464377 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42f43676-ccd3-45e3-b729-ab33430aca9a-logs\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.465419 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-logs\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.472738 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-config-data\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.484502 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-combined-ca-bundle\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.484571 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s6f92"] Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.486010 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42f43676-ccd3-45e3-b729-ab33430aca9a-config-data\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.486675 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42f43676-ccd3-45e3-b729-ab33430aca9a-combined-ca-bundle\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.487987 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42f43676-ccd3-45e3-b729-ab33430aca9a-config-data-custom\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.503121 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-config-data-custom\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: 
\"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.513874 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ghjp\" (UniqueName: \"kubernetes.io/projected/fc271fbe-58c9-4eca-adfd-63ff51aa46fa-kube-api-access-7ghjp\") pod \"barbican-worker-7546f4466c-vlsxg\" (UID: \"fc271fbe-58c9-4eca-adfd-63ff51aa46fa\") " pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.535251 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfv4f\" (UniqueName: \"kubernetes.io/projected/42f43676-ccd3-45e3-b729-ab33430aca9a-kube-api-access-pfv4f\") pod \"barbican-keystone-listener-66757bd65d-pn2zh\" (UID: \"42f43676-ccd3-45e3-b729-ab33430aca9a\") " pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.567229 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.567291 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-config\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.567315 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6t9c\" (UniqueName: \"kubernetes.io/projected/85d9ee6a-3545-4095-8905-ca17f8690bca-kube-api-access-r6t9c\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: 
\"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.567387 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-svc\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.567424 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.567443 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.567684 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.568912 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.569439 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-config\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.570226 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-svc\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.570629 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.570790 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc 
kubenswrapper[4851]: I0223 13:27:30.593867 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7546f4466c-vlsxg" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.600161 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6t9c\" (UniqueName: \"kubernetes.io/projected/85d9ee6a-3545-4095-8905-ca17f8690bca-kube-api-access-r6t9c\") pod \"dnsmasq-dns-85ff748b95-s6f92\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.726715 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.735964 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-769d54b998-47sgr"] Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.737410 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.741997 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.746484 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-769d54b998-47sgr"] Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.773440 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data-custom\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.773504 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6jhv\" (UniqueName: \"kubernetes.io/projected/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-kube-api-access-n6jhv\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.773543 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-logs\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.773582 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-combined-ca-bundle\") pod \"barbican-api-769d54b998-47sgr\" (UID: 
\"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.773607 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.875801 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data-custom\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.875864 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6jhv\" (UniqueName: \"kubernetes.io/projected/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-kube-api-access-n6jhv\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.875903 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-logs\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.877728 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-combined-ca-bundle\") pod \"barbican-api-769d54b998-47sgr\" (UID: 
\"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.877868 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.877536 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-logs\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.895998 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-combined-ca-bundle\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.912107 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.912594 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data-custom\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 
13:27:30 crc kubenswrapper[4851]: I0223 13:27:30.920410 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6jhv\" (UniqueName: \"kubernetes.io/projected/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-kube-api-access-n6jhv\") pod \"barbican-api-769d54b998-47sgr\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") " pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:31 crc kubenswrapper[4851]: I0223 13:27:31.075892 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:31 crc kubenswrapper[4851]: I0223 13:27:31.396089 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-66757bd65d-pn2zh"] Feb 23 13:27:31 crc kubenswrapper[4851]: I0223 13:27:31.425998 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s6f92"] Feb 23 13:27:31 crc kubenswrapper[4851]: I0223 13:27:31.559633 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7546f4466c-vlsxg"] Feb 23 13:27:32 crc kubenswrapper[4851]: I0223 13:27:32.142042 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69f9fbd4d-lldd8" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 23 13:27:32 crc kubenswrapper[4851]: I0223 13:27:32.236004 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64f4c4f478-f578z" podUID="1c52d079-d9d5-469e-9319-08266bea1f82" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 23 13:27:32 crc kubenswrapper[4851]: I0223 13:27:32.994564 4851 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-api-6bdd9b889b-qd9cm"] Feb 23 13:27:32 crc kubenswrapper[4851]: I0223 13:27:32.998950 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.004376 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.004563 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.018558 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bdd9b889b-qd9cm"] Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.124590 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-combined-ca-bundle\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.124675 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-config-data\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.124709 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6134ed19-8856-4c53-b30c-eee8089381fb-logs\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 
13:27:33.124746 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-public-tls-certs\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.124770 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbprz\" (UniqueName: \"kubernetes.io/projected/6134ed19-8856-4c53-b30c-eee8089381fb-kube-api-access-lbprz\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.124916 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-config-data-custom\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.125023 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-internal-tls-certs\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.227234 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-config-data-custom\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " 
pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.227289 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-internal-tls-certs\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.227555 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-combined-ca-bundle\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.227622 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-config-data\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.228922 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6134ed19-8856-4c53-b30c-eee8089381fb-logs\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.229170 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-public-tls-certs\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 
crc kubenswrapper[4851]: I0223 13:27:33.229209 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbprz\" (UniqueName: \"kubernetes.io/projected/6134ed19-8856-4c53-b30c-eee8089381fb-kube-api-access-lbprz\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.229911 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6134ed19-8856-4c53-b30c-eee8089381fb-logs\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.233878 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-config-data-custom\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.233996 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-combined-ca-bundle\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.238027 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-public-tls-certs\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.242074 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-internal-tls-certs\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.244418 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6134ed19-8856-4c53-b30c-eee8089381fb-config-data\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.246707 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbprz\" (UniqueName: \"kubernetes.io/projected/6134ed19-8856-4c53-b30c-eee8089381fb-kube-api-access-lbprz\") pod \"barbican-api-6bdd9b889b-qd9cm\" (UID: \"6134ed19-8856-4c53-b30c-eee8089381fb\") " pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:33 crc kubenswrapper[4851]: I0223 13:27:33.329694 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.795643 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.882870 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-combined-ca-bundle\") pod \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.883164 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-scripts\") pod \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.883202 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74xjm\" (UniqueName: \"kubernetes.io/projected/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-kube-api-access-74xjm\") pod \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.883274 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-config-data\") pod \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.883289 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-etc-machine-id\") pod \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.883604 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" (UID: "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.885391 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-db-sync-config-data\") pod \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\" (UID: \"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c\") " Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.886108 4851 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.887389 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" (UID: "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.887406 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-kube-api-access-74xjm" (OuterVolumeSpecName: "kube-api-access-74xjm") pod "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" (UID: "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c"). InnerVolumeSpecName "kube-api-access-74xjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.887834 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-scripts" (OuterVolumeSpecName: "scripts") pod "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" (UID: "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.931988 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" (UID: "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.962033 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-config-data" (OuterVolumeSpecName: "config-data") pod "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" (UID: "e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.989570 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.989604 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.989614 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74xjm\" (UniqueName: \"kubernetes.io/projected/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-kube-api-access-74xjm\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.989622 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:35 crc kubenswrapper[4851]: I0223 13:27:35.989630 4851 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:36 crc kubenswrapper[4851]: I0223 13:27:36.096279 4851 generic.go:334] "Generic (PLEG): container finished" podID="85d9ee6a-3545-4095-8905-ca17f8690bca" containerID="d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2" exitCode=0 Feb 23 13:27:36 crc kubenswrapper[4851]: I0223 13:27:36.096366 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" event={"ID":"85d9ee6a-3545-4095-8905-ca17f8690bca","Type":"ContainerDied","Data":"d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2"} Feb 23 13:27:36 crc 
kubenswrapper[4851]: I0223 13:27:36.096652 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" event={"ID":"85d9ee6a-3545-4095-8905-ca17f8690bca","Type":"ContainerStarted","Data":"36c810410feea633c26936671411bbedb69953c6fa3f827ae7be579b32a634b6"} Feb 23 13:27:36 crc kubenswrapper[4851]: I0223 13:27:36.099560 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n6qtq" event={"ID":"e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c","Type":"ContainerDied","Data":"200467c5dd0830690f8ac27bc980e33a004e3e355d1c6eb65a3a433d167ed0c0"} Feb 23 13:27:36 crc kubenswrapper[4851]: I0223 13:27:36.099602 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="200467c5dd0830690f8ac27bc980e33a004e3e355d1c6eb65a3a433d167ed0c0" Feb 23 13:27:36 crc kubenswrapper[4851]: I0223 13:27:36.099662 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n6qtq" Feb 23 13:27:36 crc kubenswrapper[4851]: I0223 13:27:36.101805 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" event={"ID":"42f43676-ccd3-45e3-b729-ab33430aca9a","Type":"ContainerStarted","Data":"a8045be8a2f76b864909d3e1c60a40fa12b34bb7668c4db2df05fa84d18c0d94"} Feb 23 13:27:36 crc kubenswrapper[4851]: I0223 13:27:36.103275 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7546f4466c-vlsxg" event={"ID":"fc271fbe-58c9-4eca-adfd-63ff51aa46fa","Type":"ContainerStarted","Data":"0f1ef2dd10da6a44c78a1ac246bd4284929ea8e687daf5450ee0ba00d1f840ee"} Feb 23 13:27:36 crc kubenswrapper[4851]: I0223 13:27:36.180223 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bdd9b889b-qd9cm"] Feb 23 13:27:36 crc kubenswrapper[4851]: I0223 13:27:36.285275 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-769d54b998-47sgr"] Feb 23 13:27:37 crc 
kubenswrapper[4851]: I0223 13:27:37.112695 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s6f92"] Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.131081 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 13:27:37 crc kubenswrapper[4851]: E0223 13:27:37.131805 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" containerName="cinder-db-sync" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.131893 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" containerName="cinder-db-sync" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.132136 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" containerName="cinder-db-sync" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.133156 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.146906 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.147179 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.147498 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.147692 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d82rc"] Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.150819 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-wp2ng" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.176576 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdd9b889b-qd9cm" event={"ID":"6134ed19-8856-4c53-b30c-eee8089381fb","Type":"ContainerStarted","Data":"74ddeef52a1988bb0af5702429d26966e29a63479bee8f408697a97ff317d82e"} Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.176696 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdd9b889b-qd9cm" event={"ID":"6134ed19-8856-4c53-b30c-eee8089381fb","Type":"ContainerStarted","Data":"dbdfd66f5e0b9f27b2a6492f0bc543fb198392bab6152533bc7acf27a2e44bda"} Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.177033 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.218391 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.220860 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7v5x\" (UniqueName: \"kubernetes.io/projected/50131779-f495-424b-85b7-24da5b37882d-kube-api-access-r7v5x\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.220933 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.220967 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.220989 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-scripts\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.221011 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.221067 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.221114 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50131779-f495-424b-85b7-24da5b37882d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.221133 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.221173 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.223448 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-config\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.223506 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hnf5\" (UniqueName: \"kubernetes.io/projected/e027da10-b05a-4f28-b1d0-763534dcef95-kube-api-access-7hnf5\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.223582 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.231869 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="ceilometer-central-agent" containerID="cri-o://2b8525e5c754ff6b74baf460e2f9c2963e0849a2646e751cccef5f52c23320a0" gracePeriod=30 Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.232120 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea","Type":"ContainerStarted","Data":"4acd92b67c7155f2acbad3cd3eea7fec1231e175be0cbd721852fc8362413e05"} Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.232158 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.232470 4851 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="proxy-httpd" containerID="cri-o://4acd92b67c7155f2acbad3cd3eea7fec1231e175be0cbd721852fc8362413e05" gracePeriod=30 Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.232521 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="sg-core" containerID="cri-o://3c000c0abd243391ae984b2aa4be6488d9123e4845e04e9101947b96441d1e6e" gracePeriod=30 Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.232554 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="ceilometer-notification-agent" containerID="cri-o://5bfe9161b6bb9d78efc19dae4b67f7da28e00b68c91c562a44e273b9ce05685f" gracePeriod=30 Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.247787 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d82rc"] Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.257244 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" event={"ID":"85d9ee6a-3545-4095-8905-ca17f8690bca","Type":"ContainerStarted","Data":"553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991"} Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.258018 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.264572 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-769d54b998-47sgr" event={"ID":"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511","Type":"ContainerStarted","Data":"58c2271b56e4dc6dbedd7f61419e9dfe3997ff7506a0c8910ad5ab727669a5f5"} Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.321727 4851 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.553740495 podStartE2EDuration="54.321707792s" podCreationTimestamp="2026-02-23 13:26:43 +0000 UTC" firstStartedPulling="2026-02-23 13:26:45.038625517 +0000 UTC m=+1159.720329195" lastFinishedPulling="2026-02-23 13:27:35.806592814 +0000 UTC m=+1210.488296492" observedRunningTime="2026-02-23 13:27:37.278507681 +0000 UTC m=+1211.960211369" watchObservedRunningTime="2026-02-23 13:27:37.321707792 +0000 UTC m=+1212.003411470" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.322098 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" podStartSLOduration=7.322093913 podStartE2EDuration="7.322093913s" podCreationTimestamp="2026-02-23 13:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:37.306614425 +0000 UTC m=+1211.988318123" watchObservedRunningTime="2026-02-23 13:27:37.322093913 +0000 UTC m=+1212.003797591" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.325812 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.325950 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-scripts\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.327579 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.327621 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.327827 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50131779-f495-424b-85b7-24da5b37882d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.327858 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.327860 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.327881 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.328179 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-config\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.328241 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hnf5\" (UniqueName: \"kubernetes.io/projected/e027da10-b05a-4f28-b1d0-763534dcef95-kube-api-access-7hnf5\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.328308 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.328404 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7v5x\" (UniqueName: \"kubernetes.io/projected/50131779-f495-424b-85b7-24da5b37882d-kube-api-access-r7v5x\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.328451 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " 
pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.329511 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.329977 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50131779-f495-424b-85b7-24da5b37882d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.330069 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.330569 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-scripts\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.333748 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.335026 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-config\") pod \"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.337363 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.337402 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.350250 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.357151 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7v5x\" (UniqueName: \"kubernetes.io/projected/50131779-f495-424b-85b7-24da5b37882d-kube-api-access-r7v5x\") pod \"cinder-scheduler-0\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.358846 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hnf5\" (UniqueName: \"kubernetes.io/projected/e027da10-b05a-4f28-b1d0-763534dcef95-kube-api-access-7hnf5\") pod 
\"dnsmasq-dns-5c9776ccc5-d82rc\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.359065 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.360682 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.369014 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.369675 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.429939 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data-custom\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.429984 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-logs\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.430038 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.430068 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.430108 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.430181 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hks4\" (UniqueName: \"kubernetes.io/projected/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-kube-api-access-2hks4\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.430250 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-scripts\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.509170 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.532236 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hks4\" (UniqueName: \"kubernetes.io/projected/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-kube-api-access-2hks4\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.532451 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-scripts\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.532487 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data-custom\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.532519 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-logs\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.532561 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.532600 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.532641 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.532911 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.533717 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-logs\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.537438 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-scripts\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.537919 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.538231 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data-custom\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.538528 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.551736 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hks4\" (UniqueName: \"kubernetes.io/projected/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-kube-api-access-2hks4\") pod \"cinder-api-0\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " pod="openstack/cinder-api-0" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.555958 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:37 crc kubenswrapper[4851]: I0223 13:27:37.627814 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.038917 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.190891 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d82rc"] Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.274765 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" event={"ID":"42f43676-ccd3-45e3-b729-ab33430aca9a","Type":"ContainerStarted","Data":"5638b514984f073a2e84a6e5210dd7e2c9f805249ff3eb8da5364edf2024cbf8"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.274821 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" event={"ID":"42f43676-ccd3-45e3-b729-ab33430aca9a","Type":"ContainerStarted","Data":"bc173ab26aae568bf700993bec8d3c73bf9ac6331accdf6dde2351aaf792d524"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.277048 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" event={"ID":"e027da10-b05a-4f28-b1d0-763534dcef95","Type":"ContainerStarted","Data":"141a132630d995b318ee9857ad9a6da8e9296aa332db95a893a1b8dbdb01c618"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.280515 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bdd9b889b-qd9cm" event={"ID":"6134ed19-8856-4c53-b30c-eee8089381fb","Type":"ContainerStarted","Data":"b739d503ed07b01a3a799aab0b2b7854b8859585c9e1196c68c692a15453752f"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.280756 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.280839 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.284189 4851 generic.go:334] "Generic (PLEG): container finished" podID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerID="4acd92b67c7155f2acbad3cd3eea7fec1231e175be0cbd721852fc8362413e05" exitCode=0 Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.284220 4851 generic.go:334] "Generic (PLEG): container finished" podID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerID="3c000c0abd243391ae984b2aa4be6488d9123e4845e04e9101947b96441d1e6e" exitCode=2 Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.284229 4851 generic.go:334] "Generic (PLEG): container finished" podID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerID="2b8525e5c754ff6b74baf460e2f9c2963e0849a2646e751cccef5f52c23320a0" exitCode=0 Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.284274 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea","Type":"ContainerDied","Data":"4acd92b67c7155f2acbad3cd3eea7fec1231e175be0cbd721852fc8362413e05"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.284295 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea","Type":"ContainerDied","Data":"3c000c0abd243391ae984b2aa4be6488d9123e4845e04e9101947b96441d1e6e"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.284309 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea","Type":"ContainerDied","Data":"2b8525e5c754ff6b74baf460e2f9c2963e0849a2646e751cccef5f52c23320a0"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.285942 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"50131779-f495-424b-85b7-24da5b37882d","Type":"ContainerStarted","Data":"a6b0222b24c9b1926ff5b26e420d428f7d1991a106cfd2d84d9cc519c2346ba9"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.292018 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7546f4466c-vlsxg" event={"ID":"fc271fbe-58c9-4eca-adfd-63ff51aa46fa","Type":"ContainerStarted","Data":"7993668aaf0839cb688b83da7c63856119f090b3939c24d55aa7a70e92558d62"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.292053 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7546f4466c-vlsxg" event={"ID":"fc271fbe-58c9-4eca-adfd-63ff51aa46fa","Type":"ContainerStarted","Data":"b9ddb49ddde5ce589ee1eba0604ec08254cc1431c246906168730ddf784f75bd"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.303067 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-769d54b998-47sgr" event={"ID":"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511","Type":"ContainerStarted","Data":"64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.303107 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" podUID="85d9ee6a-3545-4095-8905-ca17f8690bca" containerName="dnsmasq-dns" containerID="cri-o://553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991" gracePeriod=10 Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.303115 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-769d54b998-47sgr" event={"ID":"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511","Type":"ContainerStarted","Data":"099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d"} Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.305152 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-66757bd65d-pn2zh" 
podStartSLOduration=6.9041118 podStartE2EDuration="8.305137534s" podCreationTimestamp="2026-02-23 13:27:30 +0000 UTC" firstStartedPulling="2026-02-23 13:27:35.294476021 +0000 UTC m=+1209.976179699" lastFinishedPulling="2026-02-23 13:27:36.695501755 +0000 UTC m=+1211.377205433" observedRunningTime="2026-02-23 13:27:38.293507955 +0000 UTC m=+1212.975211653" watchObservedRunningTime="2026-02-23 13:27:38.305137534 +0000 UTC m=+1212.986841222" Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.326881 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bdd9b889b-qd9cm" podStartSLOduration=6.326854658 podStartE2EDuration="6.326854658s" podCreationTimestamp="2026-02-23 13:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:38.317093462 +0000 UTC m=+1212.998797170" watchObservedRunningTime="2026-02-23 13:27:38.326854658 +0000 UTC m=+1213.008558346" Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.357019 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7546f4466c-vlsxg" podStartSLOduration=7.115215036 podStartE2EDuration="8.35699753s" podCreationTimestamp="2026-02-23 13:27:30 +0000 UTC" firstStartedPulling="2026-02-23 13:27:35.712947217 +0000 UTC m=+1210.394650895" lastFinishedPulling="2026-02-23 13:27:36.954729711 +0000 UTC m=+1211.636433389" observedRunningTime="2026-02-23 13:27:38.334774172 +0000 UTC m=+1213.016477850" watchObservedRunningTime="2026-02-23 13:27:38.35699753 +0000 UTC m=+1213.038701208" Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.382958 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.383219 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-769d54b998-47sgr" 
podStartSLOduration=8.38319932 podStartE2EDuration="8.38319932s" podCreationTimestamp="2026-02-23 13:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:38.369240736 +0000 UTC m=+1213.050944424" watchObservedRunningTime="2026-02-23 13:27:38.38319932 +0000 UTC m=+1213.064902998" Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.920580 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.981562 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-sb\") pod \"85d9ee6a-3545-4095-8905-ca17f8690bca\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.981628 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-svc\") pod \"85d9ee6a-3545-4095-8905-ca17f8690bca\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.981661 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-config\") pod \"85d9ee6a-3545-4095-8905-ca17f8690bca\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.981740 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-nb\") pod \"85d9ee6a-3545-4095-8905-ca17f8690bca\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " Feb 23 13:27:38 crc 
kubenswrapper[4851]: I0223 13:27:38.981966 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6t9c\" (UniqueName: \"kubernetes.io/projected/85d9ee6a-3545-4095-8905-ca17f8690bca-kube-api-access-r6t9c\") pod \"85d9ee6a-3545-4095-8905-ca17f8690bca\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.982106 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-swift-storage-0\") pod \"85d9ee6a-3545-4095-8905-ca17f8690bca\" (UID: \"85d9ee6a-3545-4095-8905-ca17f8690bca\") " Feb 23 13:27:38 crc kubenswrapper[4851]: I0223 13:27:38.992781 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d9ee6a-3545-4095-8905-ca17f8690bca-kube-api-access-r6t9c" (OuterVolumeSpecName: "kube-api-access-r6t9c") pod "85d9ee6a-3545-4095-8905-ca17f8690bca" (UID: "85d9ee6a-3545-4095-8905-ca17f8690bca"). InnerVolumeSpecName "kube-api-access-r6t9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.084946 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6t9c\" (UniqueName: \"kubernetes.io/projected/85d9ee6a-3545-4095-8905-ca17f8690bca-kube-api-access-r6t9c\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.103293 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "85d9ee6a-3545-4095-8905-ca17f8690bca" (UID: "85d9ee6a-3545-4095-8905-ca17f8690bca"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.103390 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-config" (OuterVolumeSpecName: "config") pod "85d9ee6a-3545-4095-8905-ca17f8690bca" (UID: "85d9ee6a-3545-4095-8905-ca17f8690bca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.104222 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "85d9ee6a-3545-4095-8905-ca17f8690bca" (UID: "85d9ee6a-3545-4095-8905-ca17f8690bca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.135501 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "85d9ee6a-3545-4095-8905-ca17f8690bca" (UID: "85d9ee6a-3545-4095-8905-ca17f8690bca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.142834 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85d9ee6a-3545-4095-8905-ca17f8690bca" (UID: "85d9ee6a-3545-4095-8905-ca17f8690bca"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.188765 4851 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.188813 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.188828 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.188841 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.188852 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85d9ee6a-3545-4095-8905-ca17f8690bca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.319778 4851 generic.go:334] "Generic (PLEG): container finished" podID="e027da10-b05a-4f28-b1d0-763534dcef95" containerID="0893328bb76c5d71e956f01f489bd8fa89d4317e01b224a63b39fe9a34c44d8b" exitCode=0 Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.319944 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" event={"ID":"e027da10-b05a-4f28-b1d0-763534dcef95","Type":"ContainerDied","Data":"0893328bb76c5d71e956f01f489bd8fa89d4317e01b224a63b39fe9a34c44d8b"} Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 
13:27:39.325754 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d","Type":"ContainerStarted","Data":"4bbc94f0143228e5c12e3d8ddd719749d7bd8f1f9989075ccaa9c34f276c38aa"} Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.342155 4851 generic.go:334] "Generic (PLEG): container finished" podID="85d9ee6a-3545-4095-8905-ca17f8690bca" containerID="553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991" exitCode=0 Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.343289 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.344642 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" event={"ID":"85d9ee6a-3545-4095-8905-ca17f8690bca","Type":"ContainerDied","Data":"553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991"} Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.344702 4851 scope.go:117] "RemoveContainer" containerID="553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.348506 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.348550 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.348563 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-s6f92" event={"ID":"85d9ee6a-3545-4095-8905-ca17f8690bca","Type":"ContainerDied","Data":"36c810410feea633c26936671411bbedb69953c6fa3f827ae7be579b32a634b6"} Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.491186 4851 scope.go:117] "RemoveContainer" 
containerID="d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.498666 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s6f92"] Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.507050 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-s6f92"] Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.596088 4851 scope.go:117] "RemoveContainer" containerID="553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991" Feb 23 13:27:39 crc kubenswrapper[4851]: E0223 13:27:39.599465 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991\": container with ID starting with 553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991 not found: ID does not exist" containerID="553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.599511 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991"} err="failed to get container status \"553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991\": rpc error: code = NotFound desc = could not find container \"553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991\": container with ID starting with 553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991 not found: ID does not exist" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.599540 4851 scope.go:117] "RemoveContainer" containerID="d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2" Feb 23 13:27:39 crc kubenswrapper[4851]: E0223 13:27:39.599960 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2\": container with ID starting with d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2 not found: ID does not exist" containerID="d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.600008 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2"} err="failed to get container status \"d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2\": rpc error: code = NotFound desc = could not find container \"d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2\": container with ID starting with d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2 not found: ID does not exist" Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.723837 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 23 13:27:39 crc kubenswrapper[4851]: I0223 13:27:39.990040 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d9ee6a-3545-4095-8905-ca17f8690bca" path="/var/lib/kubelet/pods/85d9ee6a-3545-4095-8905-ca17f8690bca/volumes" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.364661 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"50131779-f495-424b-85b7-24da5b37882d","Type":"ContainerStarted","Data":"34f95234aa4798154e16e4d120ba82ae40979c014710eeb1d7fde036bce50e98"} Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.367165 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d","Type":"ContainerStarted","Data":"0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443"} Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.367226 4851 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/cinder-api-0" event={"ID":"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d","Type":"ContainerStarted","Data":"47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3"} Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.367393 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" containerName="cinder-api-log" containerID="cri-o://47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3" gracePeriod=30 Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.367487 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.367870 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" containerName="cinder-api" containerID="cri-o://0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443" gracePeriod=30 Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.383599 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" event={"ID":"e027da10-b05a-4f28-b1d0-763534dcef95","Type":"ContainerStarted","Data":"f0ad73ef9bc207ec19ccd223f0a31bbd576991976f71e9c6a3753fe48f19b9b2"} Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.383869 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.420128 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" podStartSLOduration=3.420105383 podStartE2EDuration="3.420105383s" podCreationTimestamp="2026-02-23 13:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:40.418089826 +0000 UTC 
m=+1215.099793514" watchObservedRunningTime="2026-02-23 13:27:40.420105383 +0000 UTC m=+1215.101809061" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.421232 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.421225485 podStartE2EDuration="3.421225485s" podCreationTimestamp="2026-02-23 13:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:40.39700997 +0000 UTC m=+1215.078713668" watchObservedRunningTime="2026-02-23 13:27:40.421225485 +0000 UTC m=+1215.102929163" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.454446 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.718771 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cf9f55d6f-4t6cw"] Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.719010 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cf9f55d6f-4t6cw" podUID="275f852c-2061-4175-bc10-0b502e44e587" containerName="neutron-api" containerID="cri-o://dc004bf07f04d3099dac40e6deb9ba3607ee5f3d136950a7dcb698f80b934aac" gracePeriod=30 Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.719101 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cf9f55d6f-4t6cw" podUID="275f852c-2061-4175-bc10-0b502e44e587" containerName="neutron-httpd" containerID="cri-o://d9c34003fc467639e156191b838c11e289a33a1f0ad1869c9a50667c117a2240" gracePeriod=30 Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.730364 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-cf9f55d6f-4t6cw" podUID="275f852c-2061-4175-bc10-0b502e44e587" containerName="neutron-httpd" probeResult="failure" output="Get 
\"https://10.217.0.155:9696/\": EOF" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.758963 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7bd58878f7-xhsz6"] Feb 23 13:27:40 crc kubenswrapper[4851]: E0223 13:27:40.759453 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d9ee6a-3545-4095-8905-ca17f8690bca" containerName="init" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.759477 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d9ee6a-3545-4095-8905-ca17f8690bca" containerName="init" Feb 23 13:27:40 crc kubenswrapper[4851]: E0223 13:27:40.759496 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d9ee6a-3545-4095-8905-ca17f8690bca" containerName="dnsmasq-dns" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.759505 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d9ee6a-3545-4095-8905-ca17f8690bca" containerName="dnsmasq-dns" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.759705 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d9ee6a-3545-4095-8905-ca17f8690bca" containerName="dnsmasq-dns" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.765192 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.775030 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bd58878f7-xhsz6"] Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.821367 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-config\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.821470 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-httpd-config\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.821503 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkgvk\" (UniqueName: \"kubernetes.io/projected/293a3d32-d143-4600-bb4e-50f2c5783f67-kube-api-access-vkgvk\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.821582 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-internal-tls-certs\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.821660 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-public-tls-certs\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.821700 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-combined-ca-bundle\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.821767 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-ovndb-tls-certs\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.923442 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-ovndb-tls-certs\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.923552 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-config\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.923579 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-httpd-config\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.924258 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkgvk\" (UniqueName: \"kubernetes.io/projected/293a3d32-d143-4600-bb4e-50f2c5783f67-kube-api-access-vkgvk\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.924299 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-internal-tls-certs\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.924353 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-public-tls-certs\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.924391 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-combined-ca-bundle\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.932874 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-httpd-config\") pod 
\"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.932922 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-internal-tls-certs\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.932991 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-config\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.933436 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-ovndb-tls-certs\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.933881 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-public-tls-certs\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc kubenswrapper[4851]: I0223 13:27:40.935238 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/293a3d32-d143-4600-bb4e-50f2c5783f67-combined-ca-bundle\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:40 crc 
kubenswrapper[4851]: I0223 13:27:40.949703 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkgvk\" (UniqueName: \"kubernetes.io/projected/293a3d32-d143-4600-bb4e-50f2c5783f67-kube-api-access-vkgvk\") pod \"neutron-7bd58878f7-xhsz6\" (UID: \"293a3d32-d143-4600-bb4e-50f2c5783f67\") " pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.090921 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.375778 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.412588 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"50131779-f495-424b-85b7-24da5b37882d","Type":"ContainerStarted","Data":"4b028227f0db97f49850a8fe257dad24818dbd0abbeff19e9328bd2f85c6700f"} Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.424692 4851 generic.go:334] "Generic (PLEG): container finished" podID="012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" containerID="0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443" exitCode=0 Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.424729 4851 generic.go:334] "Generic (PLEG): container finished" podID="012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" containerID="47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3" exitCode=143 Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.424789 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d","Type":"ContainerDied","Data":"0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443"} Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.424818 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d","Type":"ContainerDied","Data":"47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3"} Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.424830 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d","Type":"ContainerDied","Data":"4bbc94f0143228e5c12e3d8ddd719749d7bd8f1f9989075ccaa9c34f276c38aa"} Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.424853 4851 scope.go:117] "RemoveContainer" containerID="0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.425027 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.437100 4851 generic.go:334] "Generic (PLEG): container finished" podID="275f852c-2061-4175-bc10-0b502e44e587" containerID="d9c34003fc467639e156191b838c11e289a33a1f0ad1869c9a50667c117a2240" exitCode=0 Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.437630 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf9f55d6f-4t6cw" event={"ID":"275f852c-2061-4175-bc10-0b502e44e587","Type":"ContainerDied","Data":"d9c34003fc467639e156191b838c11e289a33a1f0ad1869c9a50667c117a2240"} Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.446650 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.606188712 podStartE2EDuration="4.446628873s" podCreationTimestamp="2026-02-23 13:27:37 +0000 UTC" firstStartedPulling="2026-02-23 13:27:38.063503505 +0000 UTC m=+1212.745207183" lastFinishedPulling="2026-02-23 13:27:38.903943666 +0000 UTC m=+1213.585647344" observedRunningTime="2026-02-23 13:27:41.43022601 +0000 UTC m=+1216.111929698" watchObservedRunningTime="2026-02-23 13:27:41.446628873 +0000 UTC m=+1216.128332551" Feb 23 13:27:41 crc 
kubenswrapper[4851]: I0223 13:27:41.452197 4851 scope.go:117] "RemoveContainer" containerID="47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.478349 4851 scope.go:117] "RemoveContainer" containerID="0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443" Feb 23 13:27:41 crc kubenswrapper[4851]: E0223 13:27:41.478749 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443\": container with ID starting with 0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443 not found: ID does not exist" containerID="0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.478780 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443"} err="failed to get container status \"0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443\": rpc error: code = NotFound desc = could not find container \"0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443\": container with ID starting with 0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443 not found: ID does not exist" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.478800 4851 scope.go:117] "RemoveContainer" containerID="47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3" Feb 23 13:27:41 crc kubenswrapper[4851]: E0223 13:27:41.479207 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3\": container with ID starting with 47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3 not found: ID does not exist" 
containerID="47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.479248 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3"} err="failed to get container status \"47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3\": rpc error: code = NotFound desc = could not find container \"47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3\": container with ID starting with 47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3 not found: ID does not exist" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.479273 4851 scope.go:117] "RemoveContainer" containerID="0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.481699 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443"} err="failed to get container status \"0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443\": rpc error: code = NotFound desc = could not find container \"0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443\": container with ID starting with 0bf3c7f7fb532c3f98a4bde69484ebb32e3df0565c5798321872ff5e18de0443 not found: ID does not exist" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.481729 4851 scope.go:117] "RemoveContainer" containerID="47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.482063 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3"} err="failed to get container status \"47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3\": rpc error: code = NotFound desc = could 
not find container \"47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3\": container with ID starting with 47b6ad1d03ce87392e267aa8493b053ca82376beafa1aafd3716cd74a0d1f5a3 not found: ID does not exist" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.537105 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hks4\" (UniqueName: \"kubernetes.io/projected/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-kube-api-access-2hks4\") pod \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.537202 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data-custom\") pod \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.537280 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data\") pod \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.537400 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-etc-machine-id\") pod \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.537448 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-combined-ca-bundle\") pod \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") 
" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.537481 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-scripts\") pod \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.537581 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-logs\") pod \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\" (UID: \"012d51b3-e83c-4ed8-bb9b-91ac7c8a217d\") " Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.538972 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-logs" (OuterVolumeSpecName: "logs") pod "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" (UID: "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.539014 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" (UID: "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.541856 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-kube-api-access-2hks4" (OuterVolumeSpecName: "kube-api-access-2hks4") pod "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" (UID: "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d"). InnerVolumeSpecName "kube-api-access-2hks4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.563526 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-scripts" (OuterVolumeSpecName: "scripts") pod "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" (UID: "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.563594 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" (UID: "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.585191 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" (UID: "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:41 crc kubenswrapper[4851]: I0223 13:27:41.600706 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data" (OuterVolumeSpecName: "config-data") pod "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" (UID: "012d51b3-e83c-4ed8-bb9b-91ac7c8a217d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.640093 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hks4\" (UniqueName: \"kubernetes.io/projected/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-kube-api-access-2hks4\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.640461 4851 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.640477 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.640491 4851 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.640505 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.640516 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.640527 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.694524 4851 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bd58878f7-xhsz6"] Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.770625 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.788747 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.819573 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 23 13:27:42 crc kubenswrapper[4851]: E0223 13:27:41.820047 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" containerName="cinder-api-log" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.820064 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" containerName="cinder-api-log" Feb 23 13:27:42 crc kubenswrapper[4851]: E0223 13:27:41.820083 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" containerName="cinder-api" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.820091 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" containerName="cinder-api" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.820985 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" containerName="cinder-api" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.821009 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" containerName="cinder-api-log" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.822239 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.825741 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.825996 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.826341 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.838641 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.848823 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c260317a-0cb6-475e-b780-50f6de86dda2-logs\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.848911 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.848947 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-config-data-custom\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.849010 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-scripts\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.849052 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-config-data\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.849077 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.849241 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c260317a-0cb6-475e-b780-50f6de86dda2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.849274 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.849294 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjmtj\" (UniqueName: \"kubernetes.io/projected/c260317a-0cb6-475e-b780-50f6de86dda2-kube-api-access-qjmtj\") pod \"cinder-api-0\" 
(UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.955024 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c260317a-0cb6-475e-b780-50f6de86dda2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.955078 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.955107 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjmtj\" (UniqueName: \"kubernetes.io/projected/c260317a-0cb6-475e-b780-50f6de86dda2-kube-api-access-qjmtj\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.955148 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c260317a-0cb6-475e-b780-50f6de86dda2-logs\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.955207 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.955237 4851 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-config-data-custom\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.955289 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-scripts\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.955347 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-config-data\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.955375 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.956623 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c260317a-0cb6-475e-b780-50f6de86dda2-logs\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.956704 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c260317a-0cb6-475e-b780-50f6de86dda2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: 
I0223 13:27:41.961575 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.961799 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.962400 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-config-data-custom\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.963919 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-scripts\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.964216 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.965383 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c260317a-0cb6-475e-b780-50f6de86dda2-config-data\") pod \"cinder-api-0\" (UID: 
\"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.978000 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjmtj\" (UniqueName: \"kubernetes.io/projected/c260317a-0cb6-475e-b780-50f6de86dda2-kube-api-access-qjmtj\") pod \"cinder-api-0\" (UID: \"c260317a-0cb6-475e-b780-50f6de86dda2\") " pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:41.987465 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="012d51b3-e83c-4ed8-bb9b-91ac7c8a217d" path="/var/lib/kubelet/pods/012d51b3-e83c-4ed8-bb9b-91ac7c8a217d/volumes" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:42.142619 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:42.465126 4851 generic.go:334] "Generic (PLEG): container finished" podID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerID="5bfe9161b6bb9d78efc19dae4b67f7da28e00b68c91c562a44e273b9ce05685f" exitCode=0 Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:42.465477 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea","Type":"ContainerDied","Data":"5bfe9161b6bb9d78efc19dae4b67f7da28e00b68c91c562a44e273b9ce05685f"} Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:42.470956 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bd58878f7-xhsz6" event={"ID":"293a3d32-d143-4600-bb4e-50f2c5783f67","Type":"ContainerStarted","Data":"3c1eb856eca3d1d3990f906c2279eb5e85655d0f4175fcf80a687a615cb7f98d"} Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:42.470986 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bd58878f7-xhsz6" 
event={"ID":"293a3d32-d143-4600-bb4e-50f2c5783f67","Type":"ContainerStarted","Data":"29a497d2f9abac46ddcaedc352139ec1ae3aec8821003f94d433b395553eb454"} Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:42.470997 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bd58878f7-xhsz6" event={"ID":"293a3d32-d143-4600-bb4e-50f2c5783f67","Type":"ContainerStarted","Data":"20610c9688722c45facbde2401959d063d4513285b623465da5100e00fff0c08"} Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:42.471206 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:42.509264 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:42.509668 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7bd58878f7-xhsz6" podStartSLOduration=2.509658274 podStartE2EDuration="2.509658274s" podCreationTimestamp="2026-02-23 13:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:42.49924417 +0000 UTC m=+1217.180947868" watchObservedRunningTime="2026-02-23 13:27:42.509658274 +0000 UTC m=+1217.191361952" Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:42.616106 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-cf9f55d6f-4t6cw" podUID="275f852c-2061-4175-bc10-0b502e44e587" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: connect: connection refused" Feb 23 13:27:42 crc kubenswrapper[4851]: W0223 13:27:42.724678 4851 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85d9ee6a_3545_4095_8905_ca17f8690bca.slice/crio-conmon-d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85d9ee6a_3545_4095_8905_ca17f8690bca.slice/crio-conmon-d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2.scope: no such file or directory Feb 23 13:27:42 crc kubenswrapper[4851]: W0223 13:27:42.724719 4851 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85d9ee6a_3545_4095_8905_ca17f8690bca.slice/crio-d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85d9ee6a_3545_4095_8905_ca17f8690bca.slice/crio-d806334cf493e12e9302b63c546312dce905520c4f4271c770239c4a8ae916f2.scope: no such file or directory Feb 23 13:27:42 crc kubenswrapper[4851]: W0223 13:27:42.724744 4851 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8428e9e1_8d6b_4efc_943d_7c9cc6b05cea.slice/crio-conmon-4acd92b67c7155f2acbad3cd3eea7fec1231e175be0cbd721852fc8362413e05.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8428e9e1_8d6b_4efc_943d_7c9cc6b05cea.slice/crio-conmon-4acd92b67c7155f2acbad3cd3eea7fec1231e175be0cbd721852fc8362413e05.scope: no such file or directory Feb 23 13:27:42 crc kubenswrapper[4851]: W0223 13:27:42.724763 4851 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8428e9e1_8d6b_4efc_943d_7c9cc6b05cea.slice/crio-4acd92b67c7155f2acbad3cd3eea7fec1231e175be0cbd721852fc8362413e05.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8428e9e1_8d6b_4efc_943d_7c9cc6b05cea.slice/crio-4acd92b67c7155f2acbad3cd3eea7fec1231e175be0cbd721852fc8362413e05.scope: no such file or directory Feb 23 13:27:42 crc kubenswrapper[4851]: W0223 13:27:42.735057 4851 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85d9ee6a_3545_4095_8905_ca17f8690bca.slice/crio-conmon-553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85d9ee6a_3545_4095_8905_ca17f8690bca.slice/crio-conmon-553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991.scope: no such file or directory Feb 23 13:27:42 crc kubenswrapper[4851]: W0223 13:27:42.746301 4851 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85d9ee6a_3545_4095_8905_ca17f8690bca.slice/crio-553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85d9ee6a_3545_4095_8905_ca17f8690bca.slice/crio-553770e075a9899f07e448a59dcbb686389353006a3796b05650e9e570852991.scope: no such file or directory Feb 23 13:27:42 crc kubenswrapper[4851]: W0223 13:27:42.841206 4851 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod012d51b3_e83c_4ed8_bb9b_91ac7c8a217d.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod012d51b3_e83c_4ed8_bb9b_91ac7c8a217d.slice: no such file or directory Feb 23 13:27:42 crc kubenswrapper[4851]: I0223 13:27:42.941385 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.036543 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:27:43 crc kubenswrapper[4851]: E0223 13:27:43.057257 4851 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode26fedd4_a6b4_4bbb_ad58_e64e964f1f4c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode26fedd4_a6b4_4bbb_ad58_e64e964f1f4c.slice/crio-200467c5dd0830690f8ac27bc980e33a004e3e355d1c6eb65a3a433d167ed0c0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85d9ee6a_3545_4095_8905_ca17f8690bca.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85d9ee6a_3545_4095_8905_ca17f8690bca.slice/crio-36c810410feea633c26936671411bbedb69953c6fa3f827ae7be579b32a634b6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8428e9e1_8d6b_4efc_943d_7c9cc6b05cea.slice/crio-conmon-3c000c0abd243391ae984b2aa4be6488d9123e4845e04e9101947b96441d1e6e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod275f852c_2061_4175_bc10_0b502e44e587.slice/crio-d9c34003fc467639e156191b838c11e289a33a1f0ad1869c9a50667c117a2240.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod275f852c_2061_4175_bc10_0b502e44e587.slice/crio-conmon-d9c34003fc467639e156191b838c11e289a33a1f0ad1869c9a50667c117a2240.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8428e9e1_8d6b_4efc_943d_7c9cc6b05cea.slice/crio-2b8525e5c754ff6b74baf460e2f9c2963e0849a2646e751cccef5f52c23320a0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa0fe9c5_dec4_4c19_b7c7_d9878be435a4.slice/crio-1785af1a62c4b7ac03828a6889361660d23027e9ddd0a9cb48f611a2396a75fa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa0fe9c5_dec4_4c19_b7c7_d9878be435a4.slice/crio-e632646867e97abfdbabbccc2035a2cbda5f741d333fae182aa2d58448ee3dd4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81279875_8a74_47bf_900a_dcf56249c95b.slice/crio-conmon-e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa0fe9c5_dec4_4c19_b7c7_d9878be435a4.slice/crio-conmon-e632646867e97abfdbabbccc2035a2cbda5f741d333fae182aa2d58448ee3dd4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8428e9e1_8d6b_4efc_943d_7c9cc6b05cea.slice/crio-conmon-2b8525e5c754ff6b74baf460e2f9c2963e0849a2646e751cccef5f52c23320a0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81279875_8a74_47bf_900a_dcf56249c95b.slice/crio-e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8.scope\": RecentStats: unable to find data in memory cache]" Feb 23 13:27:43 crc kubenswrapper[4851]: 
I0223 13:27:43.192230 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-log-httpd\") pod \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.192324 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j59zc\" (UniqueName: \"kubernetes.io/projected/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-kube-api-access-j59zc\") pod \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.192431 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-scripts\") pod \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.192454 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-config-data\") pod \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.192531 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-sg-core-conf-yaml\") pod \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.192585 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-run-httpd\") pod 
\"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.192663 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-combined-ca-bundle\") pod \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\" (UID: \"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.192817 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" (UID: "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.193045 4851 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.193973 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" (UID: "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.202533 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-scripts" (OuterVolumeSpecName: "scripts") pod "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" (UID: "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.237512 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-kube-api-access-j59zc" (OuterVolumeSpecName: "kube-api-access-j59zc") pod "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" (UID: "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea"). InnerVolumeSpecName "kube-api-access-j59zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.246643 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.295518 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.295573 4851 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.295586 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j59zc\" (UniqueName: \"kubernetes.io/projected/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-kube-api-access-j59zc\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.297528 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" (UID: "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.341833 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" (UID: "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.398140 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-scripts\") pod \"81279875-8a74-47bf-900a-dcf56249c95b\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.398217 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm76j\" (UniqueName: \"kubernetes.io/projected/81279875-8a74-47bf-900a-dcf56249c95b-kube-api-access-wm76j\") pod \"81279875-8a74-47bf-900a-dcf56249c95b\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.398245 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-config-data\") pod \"81279875-8a74-47bf-900a-dcf56249c95b\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.398347 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81279875-8a74-47bf-900a-dcf56249c95b-logs\") pod \"81279875-8a74-47bf-900a-dcf56249c95b\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.398378 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81279875-8a74-47bf-900a-dcf56249c95b-horizon-secret-key\") pod \"81279875-8a74-47bf-900a-dcf56249c95b\" (UID: \"81279875-8a74-47bf-900a-dcf56249c95b\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.398722 4851 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.398734 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.410620 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81279875-8a74-47bf-900a-dcf56249c95b-kube-api-access-wm76j" (OuterVolumeSpecName: "kube-api-access-wm76j") pod "81279875-8a74-47bf-900a-dcf56249c95b" (UID: "81279875-8a74-47bf-900a-dcf56249c95b"). InnerVolumeSpecName "kube-api-access-wm76j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.415849 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81279875-8a74-47bf-900a-dcf56249c95b-logs" (OuterVolumeSpecName: "logs") pod "81279875-8a74-47bf-900a-dcf56249c95b" (UID: "81279875-8a74-47bf-900a-dcf56249c95b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.416524 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81279875-8a74-47bf-900a-dcf56249c95b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "81279875-8a74-47bf-900a-dcf56249c95b" (UID: "81279875-8a74-47bf-900a-dcf56249c95b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.447239 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-scripts" (OuterVolumeSpecName: "scripts") pod "81279875-8a74-47bf-900a-dcf56249c95b" (UID: "81279875-8a74-47bf-900a-dcf56249c95b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.487828 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-config-data" (OuterVolumeSpecName: "config-data") pod "81279875-8a74-47bf-900a-dcf56249c95b" (UID: "81279875-8a74-47bf-900a-dcf56249c95b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.490553 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-config-data" (OuterVolumeSpecName: "config-data") pod "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" (UID: "8428e9e1-8d6b-4efc-943d-7c9cc6b05cea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.500629 4851 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/81279875-8a74-47bf-900a-dcf56249c95b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.500681 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.500696 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.500711 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm76j\" (UniqueName: \"kubernetes.io/projected/81279875-8a74-47bf-900a-dcf56249c95b-kube-api-access-wm76j\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.500724 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81279875-8a74-47bf-900a-dcf56249c95b-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.500735 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81279875-8a74-47bf-900a-dcf56249c95b-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.521523 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c260317a-0cb6-475e-b780-50f6de86dda2","Type":"ContainerStarted","Data":"a1ccd0916a0190537e2812e083120ff33d81f034bb2ed34197246691e53d5790"} Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.542574 4851 
generic.go:334] "Generic (PLEG): container finished" podID="81279875-8a74-47bf-900a-dcf56249c95b" containerID="3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2" exitCode=137 Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.542609 4851 generic.go:334] "Generic (PLEG): container finished" podID="81279875-8a74-47bf-900a-dcf56249c95b" containerID="e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8" exitCode=137 Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.542672 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5554597f7c-7r294" event={"ID":"81279875-8a74-47bf-900a-dcf56249c95b","Type":"ContainerDied","Data":"3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2"} Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.542701 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5554597f7c-7r294" event={"ID":"81279875-8a74-47bf-900a-dcf56249c95b","Type":"ContainerDied","Data":"e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8"} Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.542711 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5554597f7c-7r294" event={"ID":"81279875-8a74-47bf-900a-dcf56249c95b","Type":"ContainerDied","Data":"a0dc766ce21a89d4b67d394e99458857ceccecf191e6c4bb24e45e3a54dd49aa"} Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.542727 4851 scope.go:117] "RemoveContainer" containerID="3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.542894 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5554597f7c-7r294" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.585931 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8428e9e1-8d6b-4efc-943d-7c9cc6b05cea","Type":"ContainerDied","Data":"20fdeb0c21c84c9ccb37c20cb5d82bb2cf8ab5bd210bf1302d210369879bce92"} Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.586122 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.594374 4851 generic.go:334] "Generic (PLEG): container finished" podID="aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" containerID="1785af1a62c4b7ac03828a6889361660d23027e9ddd0a9cb48f611a2396a75fa" exitCode=137 Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.594401 4851 generic.go:334] "Generic (PLEG): container finished" podID="aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" containerID="e632646867e97abfdbabbccc2035a2cbda5f741d333fae182aa2d58448ee3dd4" exitCode=137 Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.594424 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-785dd4679c-lrw27" event={"ID":"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4","Type":"ContainerDied","Data":"1785af1a62c4b7ac03828a6889361660d23027e9ddd0a9cb48f611a2396a75fa"} Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.594487 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-785dd4679c-lrw27" event={"ID":"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4","Type":"ContainerDied","Data":"e632646867e97abfdbabbccc2035a2cbda5f741d333fae182aa2d58448ee3dd4"} Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.594502 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-785dd4679c-lrw27" event={"ID":"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4","Type":"ContainerDied","Data":"4d62fd4c580b9177b3ae79b3257c395214b6b986984bda3d6a9849968d3fe266"} Feb 23 13:27:43 crc 
kubenswrapper[4851]: I0223 13:27:43.594512 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d62fd4c580b9177b3ae79b3257c395214b6b986984bda3d6a9849968d3fe266" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.644791 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.654079 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5554597f7c-7r294"] Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.664389 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5554597f7c-7r294"] Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.672406 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.686402 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.732370 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:27:43 crc kubenswrapper[4851]: E0223 13:27:43.732762 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" containerName="horizon-log" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.732773 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" containerName="horizon-log" Feb 23 13:27:43 crc kubenswrapper[4851]: E0223 13:27:43.732789 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="ceilometer-notification-agent" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.732794 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="ceilometer-notification-agent" Feb 23 13:27:43 crc kubenswrapper[4851]: 
E0223 13:27:43.732802 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81279875-8a74-47bf-900a-dcf56249c95b" containerName="horizon" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.732809 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="81279875-8a74-47bf-900a-dcf56249c95b" containerName="horizon" Feb 23 13:27:43 crc kubenswrapper[4851]: E0223 13:27:43.732831 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="ceilometer-central-agent" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.732837 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="ceilometer-central-agent" Feb 23 13:27:43 crc kubenswrapper[4851]: E0223 13:27:43.732849 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="proxy-httpd" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.732855 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="proxy-httpd" Feb 23 13:27:43 crc kubenswrapper[4851]: E0223 13:27:43.732866 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="sg-core" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.732872 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="sg-core" Feb 23 13:27:43 crc kubenswrapper[4851]: E0223 13:27:43.732879 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81279875-8a74-47bf-900a-dcf56249c95b" containerName="horizon-log" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.732885 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="81279875-8a74-47bf-900a-dcf56249c95b" containerName="horizon-log" Feb 23 13:27:43 crc kubenswrapper[4851]: E0223 13:27:43.732899 4851 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" containerName="horizon" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.732905 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" containerName="horizon" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.733099 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="sg-core" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.733110 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="ceilometer-notification-agent" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.733120 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="81279875-8a74-47bf-900a-dcf56249c95b" containerName="horizon-log" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.733133 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" containerName="horizon-log" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.733142 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="81279875-8a74-47bf-900a-dcf56249c95b" containerName="horizon" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.733152 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="proxy-httpd" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.733162 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" containerName="horizon" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.733177 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" containerName="ceilometer-central-agent" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.734885 4851 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.737691 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.737896 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.767460 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.807931 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-logs\") pod \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.807982 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-horizon-secret-key\") pod \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.808073 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-config-data\") pod \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.808128 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-scripts\") pod \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " Feb 23 13:27:43 crc kubenswrapper[4851]: 
I0223 13:27:43.808198 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rrnv\" (UniqueName: \"kubernetes.io/projected/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-kube-api-access-9rrnv\") pod \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\" (UID: \"aa0fe9c5-dec4-4c19-b7c7-d9878be435a4\") " Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.826510 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-logs" (OuterVolumeSpecName: "logs") pod "aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" (UID: "aa0fe9c5-dec4-4c19-b7c7-d9878be435a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.828985 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" (UID: "aa0fe9c5-dec4-4c19-b7c7-d9878be435a4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.831561 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-kube-api-access-9rrnv" (OuterVolumeSpecName: "kube-api-access-9rrnv") pod "aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" (UID: "aa0fe9c5-dec4-4c19-b7c7-d9878be435a4"). InnerVolumeSpecName "kube-api-access-9rrnv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.875924 4851 scope.go:117] "RemoveContainer" containerID="e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.890432 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-config-data" (OuterVolumeSpecName: "config-data") pod "aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" (UID: "aa0fe9c5-dec4-4c19-b7c7-d9878be435a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.910962 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-config-data\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.911025 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tcxz\" (UniqueName: \"kubernetes.io/projected/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-kube-api-access-5tcxz\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.911073 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-log-httpd\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.911162 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-scripts\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.911210 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.911231 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.911258 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-run-httpd\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.911397 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.911416 4851 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.911428 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.911439 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rrnv\" (UniqueName: \"kubernetes.io/projected/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-kube-api-access-9rrnv\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.928824 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-scripts" (OuterVolumeSpecName: "scripts") pod "aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" (UID: "aa0fe9c5-dec4-4c19-b7c7-d9878be435a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.950489 4851 scope.go:117] "RemoveContainer" containerID="3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2" Feb 23 13:27:43 crc kubenswrapper[4851]: E0223 13:27:43.956508 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2\": container with ID starting with 3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2 not found: ID does not exist" containerID="3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.956769 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2"} err="failed to get container status \"3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2\": rpc error: code = NotFound desc = could not find container \"3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2\": container with ID starting with 
3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2 not found: ID does not exist" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.956792 4851 scope.go:117] "RemoveContainer" containerID="e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8" Feb 23 13:27:43 crc kubenswrapper[4851]: E0223 13:27:43.961438 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8\": container with ID starting with e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8 not found: ID does not exist" containerID="e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.961474 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8"} err="failed to get container status \"e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8\": rpc error: code = NotFound desc = could not find container \"e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8\": container with ID starting with e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8 not found: ID does not exist" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.961493 4851 scope.go:117] "RemoveContainer" containerID="3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.963132 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2"} err="failed to get container status \"3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2\": rpc error: code = NotFound desc = could not find container \"3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2\": container with ID 
starting with 3973d6d343a2d684b502c9d98ad4df4eba05ab87cfba50497cf6e30af31318b2 not found: ID does not exist" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.963178 4851 scope.go:117] "RemoveContainer" containerID="e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.964642 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8"} err="failed to get container status \"e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8\": rpc error: code = NotFound desc = could not find container \"e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8\": container with ID starting with e3c7fabd36c2bf57cef2367f94304ee9686f389632a843656468809597fe98c8 not found: ID does not exist" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.964663 4851 scope.go:117] "RemoveContainer" containerID="4acd92b67c7155f2acbad3cd3eea7fec1231e175be0cbd721852fc8362413e05" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.993795 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81279875-8a74-47bf-900a-dcf56249c95b" path="/var/lib/kubelet/pods/81279875-8a74-47bf-900a-dcf56249c95b/volumes" Feb 23 13:27:43 crc kubenswrapper[4851]: I0223 13:27:43.994525 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8428e9e1-8d6b-4efc-943d-7c9cc6b05cea" path="/var/lib/kubelet/pods/8428e9e1-8d6b-4efc-943d-7c9cc6b05cea/volumes" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.000563 4851 scope.go:117] "RemoveContainer" containerID="3c000c0abd243391ae984b2aa4be6488d9123e4845e04e9101947b96441d1e6e" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.012788 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-config-data\") pod 
\"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.012832 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tcxz\" (UniqueName: \"kubernetes.io/projected/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-kube-api-access-5tcxz\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.013652 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-log-httpd\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.013919 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-scripts\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.014016 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.014023 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-log-httpd\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.014043 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.014073 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-run-httpd\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.014166 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.014526 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-run-httpd\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.021035 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.024630 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-scripts\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.024972 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.029614 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-config-data\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.063551 4851 scope.go:117] "RemoveContainer" containerID="5bfe9161b6bb9d78efc19dae4b67f7da28e00b68c91c562a44e273b9ce05685f" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.065570 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tcxz\" (UniqueName: \"kubernetes.io/projected/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-kube-api-access-5tcxz\") pod \"ceilometer-0\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.089756 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.111582 4851 scope.go:117] "RemoveContainer" containerID="2b8525e5c754ff6b74baf460e2f9c2963e0849a2646e751cccef5f52c23320a0" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.604489 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c260317a-0cb6-475e-b780-50f6de86dda2","Type":"ContainerStarted","Data":"b9b8adc382fd26371016754e15485a883e620abb5c8c8637e8dc5d578c49043f"} Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.606914 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-785dd4679c-lrw27" Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.630192 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-785dd4679c-lrw27"] Feb 23 13:27:44 crc kubenswrapper[4851]: W0223 13:27:44.639962 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97bb1048_c2f0_4b91_b07e_5a2fa6d40947.slice/crio-158aefafb498cde1de9ff3649aa3adc61437a33af8aac2f18707d64c1d2e9525 WatchSource:0}: Error finding container 158aefafb498cde1de9ff3649aa3adc61437a33af8aac2f18707d64c1d2e9525: Status 404 returned error can't find the container with id 158aefafb498cde1de9ff3649aa3adc61437a33af8aac2f18707d64c1d2e9525 Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.641055 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-785dd4679c-lrw27"] Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.656607 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:27:44 crc kubenswrapper[4851]: I0223 13:27:44.724128 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-64f4c4f478-f578z" Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.062682 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.353531 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.399146 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bdd9b889b-qd9cm" Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.484947 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-769d54b998-47sgr"] Feb 23 13:27:45 crc 
kubenswrapper[4851]: I0223 13:27:45.485247 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-769d54b998-47sgr" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api-log" containerID="cri-o://099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d" gracePeriod=30 Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.485883 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-769d54b998-47sgr" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api" containerID="cri-o://64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a" gracePeriod=30 Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.544660 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-769d54b998-47sgr" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.544995 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-769d54b998-47sgr" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.643959 4851 generic.go:334] "Generic (PLEG): container finished" podID="275f852c-2061-4175-bc10-0b502e44e587" containerID="dc004bf07f04d3099dac40e6deb9ba3607ee5f3d136950a7dcb698f80b934aac" exitCode=0 Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.644310 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf9f55d6f-4t6cw" event={"ID":"275f852c-2061-4175-bc10-0b502e44e587","Type":"ContainerDied","Data":"dc004bf07f04d3099dac40e6deb9ba3607ee5f3d136950a7dcb698f80b934aac"} Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.657237 4851 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bb1048-c2f0-4b91-b07e-5a2fa6d40947","Type":"ContainerStarted","Data":"158aefafb498cde1de9ff3649aa3adc61437a33af8aac2f18707d64c1d2e9525"} Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.977123 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cf9f55d6f-4t6cw" Feb 23 13:27:45 crc kubenswrapper[4851]: I0223 13:27:45.986514 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa0fe9c5-dec4-4c19-b7c7-d9878be435a4" path="/var/lib/kubelet/pods/aa0fe9c5-dec4-4c19-b7c7-d9878be435a4/volumes" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.056962 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-combined-ca-bundle\") pod \"275f852c-2061-4175-bc10-0b502e44e587\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.057042 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-ovndb-tls-certs\") pod \"275f852c-2061-4175-bc10-0b502e44e587\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.057140 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9v24\" (UniqueName: \"kubernetes.io/projected/275f852c-2061-4175-bc10-0b502e44e587-kube-api-access-t9v24\") pod \"275f852c-2061-4175-bc10-0b502e44e587\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.057167 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-httpd-config\") 
pod \"275f852c-2061-4175-bc10-0b502e44e587\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.057212 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-internal-tls-certs\") pod \"275f852c-2061-4175-bc10-0b502e44e587\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.057231 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-public-tls-certs\") pod \"275f852c-2061-4175-bc10-0b502e44e587\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.057294 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-config\") pod \"275f852c-2061-4175-bc10-0b502e44e587\" (UID: \"275f852c-2061-4175-bc10-0b502e44e587\") " Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.068626 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275f852c-2061-4175-bc10-0b502e44e587-kube-api-access-t9v24" (OuterVolumeSpecName: "kube-api-access-t9v24") pod "275f852c-2061-4175-bc10-0b502e44e587" (UID: "275f852c-2061-4175-bc10-0b502e44e587"). InnerVolumeSpecName "kube-api-access-t9v24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.096496 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "275f852c-2061-4175-bc10-0b502e44e587" (UID: "275f852c-2061-4175-bc10-0b502e44e587"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.162636 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9v24\" (UniqueName: \"kubernetes.io/projected/275f852c-2061-4175-bc10-0b502e44e587-kube-api-access-t9v24\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.162673 4851 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.196416 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-config" (OuterVolumeSpecName: "config") pod "275f852c-2061-4175-bc10-0b502e44e587" (UID: "275f852c-2061-4175-bc10-0b502e44e587"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.222368 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "275f852c-2061-4175-bc10-0b502e44e587" (UID: "275f852c-2061-4175-bc10-0b502e44e587"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.264451 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.264480 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.265557 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "275f852c-2061-4175-bc10-0b502e44e587" (UID: "275f852c-2061-4175-bc10-0b502e44e587"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.283675 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "275f852c-2061-4175-bc10-0b502e44e587" (UID: "275f852c-2061-4175-bc10-0b502e44e587"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.310445 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "275f852c-2061-4175-bc10-0b502e44e587" (UID: "275f852c-2061-4175-bc10-0b502e44e587"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.366477 4851 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.366511 4851 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.366520 4851 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/275f852c-2061-4175-bc10-0b502e44e587-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.667786 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bb1048-c2f0-4b91-b07e-5a2fa6d40947","Type":"ContainerStarted","Data":"08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2"} Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.668102 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bb1048-c2f0-4b91-b07e-5a2fa6d40947","Type":"ContainerStarted","Data":"00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181"} Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.669755 4851 generic.go:334] "Generic (PLEG): container finished" podID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerID="099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d" exitCode=143 Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.669834 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-769d54b998-47sgr" 
event={"ID":"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511","Type":"ContainerDied","Data":"099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d"}
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.672941 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c260317a-0cb6-475e-b780-50f6de86dda2","Type":"ContainerStarted","Data":"b06f67e054301d2e1f248bc201219356c47b61e6e83ef7f2340bc33933dcffd9"}
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.673128 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.675934 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cf9f55d6f-4t6cw" event={"ID":"275f852c-2061-4175-bc10-0b502e44e587","Type":"ContainerDied","Data":"2bdbd7caecfb0a5f62b765d64a6469ae3eb85a4655fc378595c991b8e9f0dfa6"}
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.675986 4851 scope.go:117] "RemoveContainer" containerID="d9c34003fc467639e156191b838c11e289a33a1f0ad1869c9a50667c117a2240"
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.676131 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cf9f55d6f-4t6cw"
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.696243 4851 scope.go:117] "RemoveContainer" containerID="dc004bf07f04d3099dac40e6deb9ba3607ee5f3d136950a7dcb698f80b934aac"
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.696916 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.696895727 podStartE2EDuration="5.696895727s" podCreationTimestamp="2026-02-23 13:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:46.691160345 +0000 UTC m=+1221.372864043" watchObservedRunningTime="2026-02-23 13:27:46.696895727 +0000 UTC m=+1221.378599405"
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.722604 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cf9f55d6f-4t6cw"]
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.741825 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cf9f55d6f-4t6cw"]
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.816524 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-64f4c4f478-f578z"
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.914814 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69f9fbd4d-lldd8"]
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.915261 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69f9fbd4d-lldd8" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon-log" containerID="cri-o://292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4" gracePeriod=30
Feb 23 13:27:46 crc kubenswrapper[4851]: I0223 13:27:46.915603 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69f9fbd4d-lldd8" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon" containerID="cri-o://f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41" gracePeriod=30
Feb 23 13:27:47 crc kubenswrapper[4851]: I0223 13:27:47.059260 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69f9fbd4d-lldd8" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": EOF"
Feb 23 13:27:47 crc kubenswrapper[4851]: I0223 13:27:47.557489 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc"
Feb 23 13:27:47 crc kubenswrapper[4851]: I0223 13:27:47.640175 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cb5wt"]
Feb 23 13:27:47 crc kubenswrapper[4851]: I0223 13:27:47.641904 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" podUID="8b4db413-40fb-450b-8e21-445e63d1963c" containerName="dnsmasq-dns" containerID="cri-o://e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301" gracePeriod=10
Feb 23 13:27:47 crc kubenswrapper[4851]: I0223 13:27:47.697602 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bb1048-c2f0-4b91-b07e-5a2fa6d40947","Type":"ContainerStarted","Data":"b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93"}
Feb 23 13:27:47 crc kubenswrapper[4851]: I0223 13:27:47.809558 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 23 13:27:47 crc kubenswrapper[4851]: I0223 13:27:47.847046 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 23 13:27:47 crc kubenswrapper[4851]: I0223 13:27:47.978692 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275f852c-2061-4175-bc10-0b502e44e587" path="/var/lib/kubelet/pods/275f852c-2061-4175-bc10-0b502e44e587/volumes"
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.248811 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt"
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.327116 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-swift-storage-0\") pod \"8b4db413-40fb-450b-8e21-445e63d1963c\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") "
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.327166 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-sb\") pod \"8b4db413-40fb-450b-8e21-445e63d1963c\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") "
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.327189 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-config\") pod \"8b4db413-40fb-450b-8e21-445e63d1963c\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") "
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.327373 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-svc\") pod \"8b4db413-40fb-450b-8e21-445e63d1963c\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") "
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.327473 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4nxs\" (UniqueName: \"kubernetes.io/projected/8b4db413-40fb-450b-8e21-445e63d1963c-kube-api-access-r4nxs\") pod \"8b4db413-40fb-450b-8e21-445e63d1963c\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") "
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.327500 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-nb\") pod \"8b4db413-40fb-450b-8e21-445e63d1963c\" (UID: \"8b4db413-40fb-450b-8e21-445e63d1963c\") "
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.361648 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b4db413-40fb-450b-8e21-445e63d1963c-kube-api-access-r4nxs" (OuterVolumeSpecName: "kube-api-access-r4nxs") pod "8b4db413-40fb-450b-8e21-445e63d1963c" (UID: "8b4db413-40fb-450b-8e21-445e63d1963c"). InnerVolumeSpecName "kube-api-access-r4nxs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.393275 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b4db413-40fb-450b-8e21-445e63d1963c" (UID: "8b4db413-40fb-450b-8e21-445e63d1963c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.397517 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8b4db413-40fb-450b-8e21-445e63d1963c" (UID: "8b4db413-40fb-450b-8e21-445e63d1963c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.407226 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8b4db413-40fb-450b-8e21-445e63d1963c" (UID: "8b4db413-40fb-450b-8e21-445e63d1963c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.430794 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4nxs\" (UniqueName: \"kubernetes.io/projected/8b4db413-40fb-450b-8e21-445e63d1963c-kube-api-access-r4nxs\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.431110 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.431126 4851 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.431137 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.431896 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-config" (OuterVolumeSpecName: "config") pod "8b4db413-40fb-450b-8e21-445e63d1963c" (UID: "8b4db413-40fb-450b-8e21-445e63d1963c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.439425 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8b4db413-40fb-450b-8e21-445e63d1963c" (UID: "8b4db413-40fb-450b-8e21-445e63d1963c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.532213 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.532257 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b4db413-40fb-450b-8e21-445e63d1963c-config\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.706651 4851 generic.go:334] "Generic (PLEG): container finished" podID="8b4db413-40fb-450b-8e21-445e63d1963c" containerID="e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301" exitCode=0
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.706894 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="50131779-f495-424b-85b7-24da5b37882d" containerName="cinder-scheduler" containerID="cri-o://34f95234aa4798154e16e4d120ba82ae40979c014710eeb1d7fde036bce50e98" gracePeriod=30
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.707245 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt"
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.710437 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" event={"ID":"8b4db413-40fb-450b-8e21-445e63d1963c","Type":"ContainerDied","Data":"e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301"}
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.710469 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-cb5wt" event={"ID":"8b4db413-40fb-450b-8e21-445e63d1963c","Type":"ContainerDied","Data":"50d8494090c54e623b90165bcb5b60926161eba2077bd2354e23c58b728222cf"}
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.710485 4851 scope.go:117] "RemoveContainer" containerID="e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301"
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.710805 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="50131779-f495-424b-85b7-24da5b37882d" containerName="probe" containerID="cri-o://4b028227f0db97f49850a8fe257dad24818dbd0abbeff19e9328bd2f85c6700f" gracePeriod=30
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.736569 4851 scope.go:117] "RemoveContainer" containerID="45b0b4c3952104d2e85c689068ea42f54834e086b4e01a98e5c1932a1be3cfa7"
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.745534 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cb5wt"]
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.752658 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-cb5wt"]
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.766670 4851 scope.go:117] "RemoveContainer" containerID="e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301"
Feb 23 13:27:48 crc kubenswrapper[4851]: E0223 13:27:48.767020 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301\": container with ID starting with e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301 not found: ID does not exist" containerID="e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301"
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.767047 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301"} err="failed to get container status \"e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301\": rpc error: code = NotFound desc = could not find container \"e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301\": container with ID starting with e5bdcc4d9b023f5f970323a72c9b55abae5b013b9d37c48066c3e7c6a269d301 not found: ID does not exist"
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.767064 4851 scope.go:117] "RemoveContainer" containerID="45b0b4c3952104d2e85c689068ea42f54834e086b4e01a98e5c1932a1be3cfa7"
Feb 23 13:27:48 crc kubenswrapper[4851]: E0223 13:27:48.767400 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45b0b4c3952104d2e85c689068ea42f54834e086b4e01a98e5c1932a1be3cfa7\": container with ID starting with 45b0b4c3952104d2e85c689068ea42f54834e086b4e01a98e5c1932a1be3cfa7 not found: ID does not exist" containerID="45b0b4c3952104d2e85c689068ea42f54834e086b4e01a98e5c1932a1be3cfa7"
Feb 23 13:27:48 crc kubenswrapper[4851]: I0223 13:27:48.767461 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45b0b4c3952104d2e85c689068ea42f54834e086b4e01a98e5c1932a1be3cfa7"} err="failed to get container status \"45b0b4c3952104d2e85c689068ea42f54834e086b4e01a98e5c1932a1be3cfa7\": rpc error: code = NotFound desc = could not find container \"45b0b4c3952104d2e85c689068ea42f54834e086b4e01a98e5c1932a1be3cfa7\": container with ID starting with 45b0b4c3952104d2e85c689068ea42f54834e086b4e01a98e5c1932a1be3cfa7 not found: ID does not exist"
Feb 23 13:27:49 crc kubenswrapper[4851]: I0223 13:27:49.717258 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bb1048-c2f0-4b91-b07e-5a2fa6d40947","Type":"ContainerStarted","Data":"ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4"}
Feb 23 13:27:49 crc kubenswrapper[4851]: I0223 13:27:49.717600 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 23 13:27:49 crc kubenswrapper[4851]: I0223 13:27:49.721457 4851 generic.go:334] "Generic (PLEG): container finished" podID="50131779-f495-424b-85b7-24da5b37882d" containerID="4b028227f0db97f49850a8fe257dad24818dbd0abbeff19e9328bd2f85c6700f" exitCode=0
Feb 23 13:27:49 crc kubenswrapper[4851]: I0223 13:27:49.721530 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"50131779-f495-424b-85b7-24da5b37882d","Type":"ContainerDied","Data":"4b028227f0db97f49850a8fe257dad24818dbd0abbeff19e9328bd2f85c6700f"}
Feb 23 13:27:49 crc kubenswrapper[4851]: I0223 13:27:49.748901 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.376389739 podStartE2EDuration="6.748881616s" podCreationTimestamp="2026-02-23 13:27:43 +0000 UTC" firstStartedPulling="2026-02-23 13:27:44.642435878 +0000 UTC m=+1219.324139556" lastFinishedPulling="2026-02-23 13:27:49.014927755 +0000 UTC m=+1223.696631433" observedRunningTime="2026-02-23 13:27:49.741461537 +0000 UTC m=+1224.423165215" watchObservedRunningTime="2026-02-23 13:27:49.748881616 +0000 UTC m=+1224.430585294"
Feb 23 13:27:49 crc kubenswrapper[4851]: I0223 13:27:49.980140 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b4db413-40fb-450b-8e21-445e63d1963c" path="/var/lib/kubelet/pods/8b4db413-40fb-450b-8e21-445e63d1963c/volumes"
Feb 23 13:27:50 crc kubenswrapper[4851]: I0223 13:27:50.797861 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-b7f866994-tdwdz"
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.160613 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-769d54b998-47sgr" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.160955 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-769d54b998-47sgr" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.467389 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69f9fbd4d-lldd8" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:56504->10.217.0.149:8443: read: connection reset by peer"
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.749520 4851 generic.go:334] "Generic (PLEG): container finished" podID="50131779-f495-424b-85b7-24da5b37882d" containerID="34f95234aa4798154e16e4d120ba82ae40979c014710eeb1d7fde036bce50e98" exitCode=0
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.749572 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"50131779-f495-424b-85b7-24da5b37882d","Type":"ContainerDied","Data":"34f95234aa4798154e16e4d120ba82ae40979c014710eeb1d7fde036bce50e98"}
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.751409 4851 generic.go:334] "Generic (PLEG): container finished" podID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerID="f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41" exitCode=0
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.751462 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f9fbd4d-lldd8" event={"ID":"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32","Type":"ContainerDied","Data":"f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41"}
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.864117 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.960566 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-769d54b998-47sgr" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:46436->10.217.0.162:9311: read: connection reset by peer"
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.960883 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-769d54b998-47sgr" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:46452->10.217.0.162:9311: read: connection reset by peer"
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.994564 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-combined-ca-bundle\") pod \"50131779-f495-424b-85b7-24da5b37882d\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") "
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.994684 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data\") pod \"50131779-f495-424b-85b7-24da5b37882d\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") "
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.994763 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7v5x\" (UniqueName: \"kubernetes.io/projected/50131779-f495-424b-85b7-24da5b37882d-kube-api-access-r7v5x\") pod \"50131779-f495-424b-85b7-24da5b37882d\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") "
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.994866 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data-custom\") pod \"50131779-f495-424b-85b7-24da5b37882d\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") "
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.994902 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-scripts\") pod \"50131779-f495-424b-85b7-24da5b37882d\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") "
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.994922 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50131779-f495-424b-85b7-24da5b37882d-etc-machine-id\") pod \"50131779-f495-424b-85b7-24da5b37882d\" (UID: \"50131779-f495-424b-85b7-24da5b37882d\") "
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.995167 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50131779-f495-424b-85b7-24da5b37882d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "50131779-f495-424b-85b7-24da5b37882d" (UID: "50131779-f495-424b-85b7-24da5b37882d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 13:27:51 crc kubenswrapper[4851]: I0223 13:27:51.996003 4851 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50131779-f495-424b-85b7-24da5b37882d-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.000246 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-scripts" (OuterVolumeSpecName: "scripts") pod "50131779-f495-424b-85b7-24da5b37882d" (UID: "50131779-f495-424b-85b7-24da5b37882d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.000259 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50131779-f495-424b-85b7-24da5b37882d-kube-api-access-r7v5x" (OuterVolumeSpecName: "kube-api-access-r7v5x") pod "50131779-f495-424b-85b7-24da5b37882d" (UID: "50131779-f495-424b-85b7-24da5b37882d"). InnerVolumeSpecName "kube-api-access-r7v5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.003459 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "50131779-f495-424b-85b7-24da5b37882d" (UID: "50131779-f495-424b-85b7-24da5b37882d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.046439 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50131779-f495-424b-85b7-24da5b37882d" (UID: "50131779-f495-424b-85b7-24da5b37882d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.091680 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data" (OuterVolumeSpecName: "config-data") pod "50131779-f495-424b-85b7-24da5b37882d" (UID: "50131779-f495-424b-85b7-24da5b37882d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.098117 4851 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.098259 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.098378 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.098456 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50131779-f495-424b-85b7-24da5b37882d-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.098524 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7v5x\" (UniqueName: \"kubernetes.io/projected/50131779-f495-424b-85b7-24da5b37882d-kube-api-access-r7v5x\") on node \"crc\" DevicePath \"\""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.145656 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69f9fbd4d-lldd8" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.305916 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 23 13:27:52 crc kubenswrapper[4851]: E0223 13:27:52.306307 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4db413-40fb-450b-8e21-445e63d1963c" containerName="init"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.306341 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4db413-40fb-450b-8e21-445e63d1963c" containerName="init"
Feb 23 13:27:52 crc kubenswrapper[4851]: E0223 13:27:52.306377 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50131779-f495-424b-85b7-24da5b37882d" containerName="cinder-scheduler"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.306388 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="50131779-f495-424b-85b7-24da5b37882d" containerName="cinder-scheduler"
Feb 23 13:27:52 crc kubenswrapper[4851]: E0223 13:27:52.306401 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50131779-f495-424b-85b7-24da5b37882d" containerName="probe"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.306407 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="50131779-f495-424b-85b7-24da5b37882d" containerName="probe"
Feb 23 13:27:52 crc kubenswrapper[4851]: E0223 13:27:52.306419 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b4db413-40fb-450b-8e21-445e63d1963c" containerName="dnsmasq-dns"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.306424 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b4db413-40fb-450b-8e21-445e63d1963c" containerName="dnsmasq-dns"
Feb 23 13:27:52 crc kubenswrapper[4851]: E0223 13:27:52.306437 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275f852c-2061-4175-bc10-0b502e44e587" containerName="neutron-api"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.306443 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="275f852c-2061-4175-bc10-0b502e44e587" containerName="neutron-api"
Feb 23 13:27:52 crc kubenswrapper[4851]: E0223 13:27:52.306455 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275f852c-2061-4175-bc10-0b502e44e587" containerName="neutron-httpd"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.306461 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="275f852c-2061-4175-bc10-0b502e44e587" containerName="neutron-httpd"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.306617 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b4db413-40fb-450b-8e21-445e63d1963c" containerName="dnsmasq-dns"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.306627 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="50131779-f495-424b-85b7-24da5b37882d" containerName="probe"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.306645 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="50131779-f495-424b-85b7-24da5b37882d" containerName="cinder-scheduler"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.306654 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="275f852c-2061-4175-bc10-0b502e44e587" containerName="neutron-httpd"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.306661 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="275f852c-2061-4175-bc10-0b502e44e587" containerName="neutron-api"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.307238 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.312948 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.313304 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.313924 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-2ctz5"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.339390 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.346738 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-769d54b998-47sgr"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.403971 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data-custom\") pod \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") "
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.404493 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6jhv\" (UniqueName: \"kubernetes.io/projected/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-kube-api-access-n6jhv\") pod \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") "
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.404772 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnh4x\" (UniqueName: \"kubernetes.io/projected/86b670f3-6886-4a48-b0ec-a109e93c87a0-kube-api-access-cnh4x\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.404801 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b670f3-6886-4a48-b0ec-a109e93c87a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.404913 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/86b670f3-6886-4a48-b0ec-a109e93c87a0-openstack-config\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.404947 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/86b670f3-6886-4a48-b0ec-a109e93c87a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.408929 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" (UID: "95cf51a5-1d34-4bd8-afd7-f5e03f6a3511"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.409237 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-kube-api-access-n6jhv" (OuterVolumeSpecName: "kube-api-access-n6jhv") pod "95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" (UID: "95cf51a5-1d34-4bd8-afd7-f5e03f6a3511"). InnerVolumeSpecName "kube-api-access-n6jhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.506288 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-combined-ca-bundle\") pod \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") "
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.506568 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data\") pod \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") "
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.506715 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-logs\") pod \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\" (UID: \"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511\") "
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.506911 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/86b670f3-6886-4a48-b0ec-a109e93c87a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.506980 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnh4x\" (UniqueName: \"kubernetes.io/projected/86b670f3-6886-4a48-b0ec-a109e93c87a0-kube-api-access-cnh4x\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient"
Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.507725 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/empty-dir/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-logs" (OuterVolumeSpecName: "logs") pod "95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" (UID: "95cf51a5-1d34-4bd8-afd7-f5e03f6a3511"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.507806 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b670f3-6886-4a48-b0ec-a109e93c87a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.508665 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/86b670f3-6886-4a48-b0ec-a109e93c87a0-openstack-config\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.508898 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6jhv\" (UniqueName: \"kubernetes.io/projected/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-kube-api-access-n6jhv\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.508983 4851 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.509066 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.509926 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/86b670f3-6886-4a48-b0ec-a109e93c87a0-openstack-config\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.512033 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/86b670f3-6886-4a48-b0ec-a109e93c87a0-openstack-config-secret\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.521981 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b670f3-6886-4a48-b0ec-a109e93c87a0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.525716 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnh4x\" (UniqueName: \"kubernetes.io/projected/86b670f3-6886-4a48-b0ec-a109e93c87a0-kube-api-access-cnh4x\") pod \"openstackclient\" (UID: \"86b670f3-6886-4a48-b0ec-a109e93c87a0\") " pod="openstack/openstackclient" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.536649 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" (UID: "95cf51a5-1d34-4bd8-afd7-f5e03f6a3511"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.557684 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data" (OuterVolumeSpecName: "config-data") pod "95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" (UID: "95cf51a5-1d34-4bd8-afd7-f5e03f6a3511"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.610656 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.610696 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.632993 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.765760 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"50131779-f495-424b-85b7-24da5b37882d","Type":"ContainerDied","Data":"a6b0222b24c9b1926ff5b26e420d428f7d1991a106cfd2d84d9cc519c2346ba9"} Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.766031 4851 scope.go:117] "RemoveContainer" containerID="4b028227f0db97f49850a8fe257dad24818dbd0abbeff19e9328bd2f85c6700f" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.766157 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.798503 4851 generic.go:334] "Generic (PLEG): container finished" podID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerID="64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a" exitCode=0 Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.798544 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-769d54b998-47sgr" event={"ID":"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511","Type":"ContainerDied","Data":"64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a"} Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.798584 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-769d54b998-47sgr" event={"ID":"95cf51a5-1d34-4bd8-afd7-f5e03f6a3511","Type":"ContainerDied","Data":"58c2271b56e4dc6dbedd7f61419e9dfe3997ff7506a0c8910ad5ab727669a5f5"} Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.798662 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-769d54b998-47sgr" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.829016 4851 scope.go:117] "RemoveContainer" containerID="34f95234aa4798154e16e4d120ba82ae40979c014710eeb1d7fde036bce50e98" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.851447 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.873532 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.879495 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 13:27:52 crc kubenswrapper[4851]: E0223 13:27:52.879993 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.880016 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api" Feb 23 13:27:52 crc kubenswrapper[4851]: E0223 13:27:52.880041 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api-log" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.880049 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api-log" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.880236 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api-log" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.880250 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" containerName="barbican-api" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.881166 4851 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.883702 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.900808 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.908597 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-769d54b998-47sgr"] Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.920732 4851 scope.go:117] "RemoveContainer" containerID="64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.923092 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-769d54b998-47sgr"] Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.947390 4851 scope.go:117] "RemoveContainer" containerID="099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.970797 4851 scope.go:117] "RemoveContainer" containerID="64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a" Feb 23 13:27:52 crc kubenswrapper[4851]: E0223 13:27:52.972874 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a\": container with ID starting with 64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a not found: ID does not exist" containerID="64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.972905 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a"} err="failed to get container 
status \"64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a\": rpc error: code = NotFound desc = could not find container \"64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a\": container with ID starting with 64f0e4ed055dbaa8c98158b6db9acb9b6ad312a21d38c79e830fa8481229b41a not found: ID does not exist" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.972924 4851 scope.go:117] "RemoveContainer" containerID="099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d" Feb 23 13:27:52 crc kubenswrapper[4851]: E0223 13:27:52.973130 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d\": container with ID starting with 099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d not found: ID does not exist" containerID="099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d" Feb 23 13:27:52 crc kubenswrapper[4851]: I0223 13:27:52.973152 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d"} err="failed to get container status \"099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d\": rpc error: code = NotFound desc = could not find container \"099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d\": container with ID starting with 099b07c17a350f9164d30986a88f77ce665d7f4ff365df44d6308841883eec5d not found: ID does not exist" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.017721 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-scripts\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 
13:27:53.017898 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.017926 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-config-data\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.017963 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4wct\" (UniqueName: \"kubernetes.io/projected/2894d16c-17aa-4037-afa2-37081858ab01-kube-api-access-p4wct\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.018084 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2894d16c-17aa-4037-afa2-37081858ab01-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.018181 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.119585 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.119623 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-config-data\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.119668 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4wct\" (UniqueName: \"kubernetes.io/projected/2894d16c-17aa-4037-afa2-37081858ab01-kube-api-access-p4wct\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.119692 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2894d16c-17aa-4037-afa2-37081858ab01-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.119729 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.119879 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-scripts\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.120815 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2894d16c-17aa-4037-afa2-37081858ab01-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.124272 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-scripts\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.124725 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.135655 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.138249 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2894d16c-17aa-4037-afa2-37081858ab01-config-data\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc 
kubenswrapper[4851]: I0223 13:27:53.139439 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4wct\" (UniqueName: \"kubernetes.io/projected/2894d16c-17aa-4037-afa2-37081858ab01-kube-api-access-p4wct\") pod \"cinder-scheduler-0\" (UID: \"2894d16c-17aa-4037-afa2-37081858ab01\") " pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.189293 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 23 13:27:53 crc kubenswrapper[4851]: W0223 13:27:53.190528 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86b670f3_6886_4a48_b0ec_a109e93c87a0.slice/crio-849edd9dea49e8a5d8397f8fb5dd2a4d4a7539dc19195bf2dc939262a6cb55e6 WatchSource:0}: Error finding container 849edd9dea49e8a5d8397f8fb5dd2a4d4a7539dc19195bf2dc939262a6cb55e6: Status 404 returned error can't find the container with id 849edd9dea49e8a5d8397f8fb5dd2a4d4a7539dc19195bf2dc939262a6cb55e6 Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.213270 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.655396 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.816480 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2894d16c-17aa-4037-afa2-37081858ab01","Type":"ContainerStarted","Data":"41d21c7f33f16925081cb5a30b0b1cf68c53e68d500f3df625f0caa62a4d54e0"} Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.817638 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"86b670f3-6886-4a48-b0ec-a109e93c87a0","Type":"ContainerStarted","Data":"849edd9dea49e8a5d8397f8fb5dd2a4d4a7539dc19195bf2dc939262a6cb55e6"} Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.985410 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50131779-f495-424b-85b7-24da5b37882d" path="/var/lib/kubelet/pods/50131779-f495-424b-85b7-24da5b37882d/volumes" Feb 23 13:27:53 crc kubenswrapper[4851]: I0223 13:27:53.986668 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95cf51a5-1d34-4bd8-afd7-f5e03f6a3511" path="/var/lib/kubelet/pods/95cf51a5-1d34-4bd8-afd7-f5e03f6a3511/volumes" Feb 23 13:27:54 crc kubenswrapper[4851]: I0223 13:27:54.124977 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 23 13:27:54 crc kubenswrapper[4851]: I0223 13:27:54.840251 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2894d16c-17aa-4037-afa2-37081858ab01","Type":"ContainerStarted","Data":"872ecd90b5adff9d85afacdf3ab3bb1802701447ff1e0e9d1d2c847bb0305e3b"} Feb 23 13:27:55 crc kubenswrapper[4851]: I0223 13:27:55.850450 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"2894d16c-17aa-4037-afa2-37081858ab01","Type":"ContainerStarted","Data":"4c10033688a540890977d2cf88491bfa6b7e1a0b46a42a9f55cd5ccea2e2a3dc"} Feb 23 13:27:55 crc kubenswrapper[4851]: I0223 13:27:55.866441 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.8664203 podStartE2EDuration="3.8664203s" podCreationTimestamp="2026-02-23 13:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:55.864676181 +0000 UTC m=+1230.546379869" watchObservedRunningTime="2026-02-23 13:27:55.8664203 +0000 UTC m=+1230.548123978" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.530288 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5ccf5dc859-8drcp"] Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.542659 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.549221 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5ccf5dc859-8drcp"] Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.549994 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.550168 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.550394 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.688670 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksww7\" (UniqueName: 
\"kubernetes.io/projected/70c040ea-0409-4501-9416-f1f40c5c6882-kube-api-access-ksww7\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.688947 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70c040ea-0409-4501-9416-f1f40c5c6882-log-httpd\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.689005 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-combined-ca-bundle\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.689088 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-internal-tls-certs\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.689139 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70c040ea-0409-4501-9416-f1f40c5c6882-etc-swift\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.689158 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-public-tls-certs\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.689350 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70c040ea-0409-4501-9416-f1f40c5c6882-run-httpd\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.689393 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-config-data\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.791412 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-internal-tls-certs\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.791468 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70c040ea-0409-4501-9416-f1f40c5c6882-etc-swift\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.791560 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-public-tls-certs\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.792466 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70c040ea-0409-4501-9416-f1f40c5c6882-run-httpd\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.792512 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-config-data\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.792641 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksww7\" (UniqueName: \"kubernetes.io/projected/70c040ea-0409-4501-9416-f1f40c5c6882-kube-api-access-ksww7\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.792865 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70c040ea-0409-4501-9416-f1f40c5c6882-log-httpd\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.792938 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-combined-ca-bundle\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.793042 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70c040ea-0409-4501-9416-f1f40c5c6882-run-httpd\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.793801 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70c040ea-0409-4501-9416-f1f40c5c6882-log-httpd\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.799502 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-config-data\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.799385 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-internal-tls-certs\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.800610 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70c040ea-0409-4501-9416-f1f40c5c6882-etc-swift\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: 
\"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.801573 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-combined-ca-bundle\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.809024 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c040ea-0409-4501-9416-f1f40c5c6882-public-tls-certs\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.818959 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksww7\" (UniqueName: \"kubernetes.io/projected/70c040ea-0409-4501-9416-f1f40c5c6882-kube-api-access-ksww7\") pod \"swift-proxy-5ccf5dc859-8drcp\" (UID: \"70c040ea-0409-4501-9416-f1f40c5c6882\") " pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:56 crc kubenswrapper[4851]: I0223 13:27:56.875724 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.039505 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.099731 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d6d46b468-7drjb" Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.465656 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5ccf5dc859-8drcp"] Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.487594 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.488243 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="ceilometer-central-agent" containerID="cri-o://00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181" gracePeriod=30 Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.488446 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="ceilometer-notification-agent" containerID="cri-o://08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2" gracePeriod=30 Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.488491 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="sg-core" containerID="cri-o://b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93" gracePeriod=30 Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.488408 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="proxy-httpd" containerID="cri-o://ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4" gracePeriod=30 Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.886342 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5ccf5dc859-8drcp" event={"ID":"70c040ea-0409-4501-9416-f1f40c5c6882","Type":"ContainerStarted","Data":"06da956a54f7a79777083a245c9b3532b85789d34742ae0939c927e73c2426d7"} Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.886671 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5ccf5dc859-8drcp" event={"ID":"70c040ea-0409-4501-9416-f1f40c5c6882","Type":"ContainerStarted","Data":"91fd2c1b797af0e996ae158cec8571ff5e8a7e0b660c45b5f2fc4e4e53fdf5f7"} Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.893652 4851 generic.go:334] "Generic (PLEG): container finished" podID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerID="ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4" exitCode=0 Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.893693 4851 generic.go:334] "Generic (PLEG): container finished" podID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerID="b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93" exitCode=2 Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.894752 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bb1048-c2f0-4b91-b07e-5a2fa6d40947","Type":"ContainerDied","Data":"ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4"} Feb 23 13:27:57 crc kubenswrapper[4851]: I0223 13:27:57.894789 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bb1048-c2f0-4b91-b07e-5a2fa6d40947","Type":"ContainerDied","Data":"b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93"} Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.214346 4851 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.329670 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.438168 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-scripts\") pod \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.438477 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tcxz\" (UniqueName: \"kubernetes.io/projected/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-kube-api-access-5tcxz\") pod \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.438528 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-sg-core-conf-yaml\") pod \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.438570 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-config-data\") pod \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.438600 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-log-httpd\") pod \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\" (UID: 
\"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.438618 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-run-httpd\") pod \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.438736 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-combined-ca-bundle\") pod \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\" (UID: \"97bb1048-c2f0-4b91-b07e-5a2fa6d40947\") " Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.439122 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "97bb1048-c2f0-4b91-b07e-5a2fa6d40947" (UID: "97bb1048-c2f0-4b91-b07e-5a2fa6d40947"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.439379 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "97bb1048-c2f0-4b91-b07e-5a2fa6d40947" (UID: "97bb1048-c2f0-4b91-b07e-5a2fa6d40947"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.444798 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-scripts" (OuterVolumeSpecName: "scripts") pod "97bb1048-c2f0-4b91-b07e-5a2fa6d40947" (UID: "97bb1048-c2f0-4b91-b07e-5a2fa6d40947"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.446267 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-kube-api-access-5tcxz" (OuterVolumeSpecName: "kube-api-access-5tcxz") pod "97bb1048-c2f0-4b91-b07e-5a2fa6d40947" (UID: "97bb1048-c2f0-4b91-b07e-5a2fa6d40947"). InnerVolumeSpecName "kube-api-access-5tcxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.505121 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "97bb1048-c2f0-4b91-b07e-5a2fa6d40947" (UID: "97bb1048-c2f0-4b91-b07e-5a2fa6d40947"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.540603 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tcxz\" (UniqueName: \"kubernetes.io/projected/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-kube-api-access-5tcxz\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.540637 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.540646 4851 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.540655 4851 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-log-httpd\") on node 
\"crc\" DevicePath \"\"" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.540664 4851 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.554405 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97bb1048-c2f0-4b91-b07e-5a2fa6d40947" (UID: "97bb1048-c2f0-4b91-b07e-5a2fa6d40947"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.616887 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-config-data" (OuterVolumeSpecName: "config-data") pod "97bb1048-c2f0-4b91-b07e-5a2fa6d40947" (UID: "97bb1048-c2f0-4b91-b07e-5a2fa6d40947"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.642283 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.642321 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97bb1048-c2f0-4b91-b07e-5a2fa6d40947-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.914322 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5ccf5dc859-8drcp" event={"ID":"70c040ea-0409-4501-9416-f1f40c5c6882","Type":"ContainerStarted","Data":"7d572215d9156541f5dbfdaa11538c737aef2726a6ae4e371349d839193db7e4"} Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.914663 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.914683 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5ccf5dc859-8drcp" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.925187 4851 generic.go:334] "Generic (PLEG): container finished" podID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerID="08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2" exitCode=0 Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.925227 4851 generic.go:334] "Generic (PLEG): container finished" podID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerID="00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181" exitCode=0 Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.925229 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.925247 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bb1048-c2f0-4b91-b07e-5a2fa6d40947","Type":"ContainerDied","Data":"08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2"} Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.925278 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bb1048-c2f0-4b91-b07e-5a2fa6d40947","Type":"ContainerDied","Data":"00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181"} Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.925289 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"97bb1048-c2f0-4b91-b07e-5a2fa6d40947","Type":"ContainerDied","Data":"158aefafb498cde1de9ff3649aa3adc61437a33af8aac2f18707d64c1d2e9525"} Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.925303 4851 scope.go:117] "RemoveContainer" containerID="ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.938310 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5ccf5dc859-8drcp" podStartSLOduration=2.938294028 podStartE2EDuration="2.938294028s" podCreationTimestamp="2026-02-23 13:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:27:58.936178179 +0000 UTC m=+1233.617881857" watchObservedRunningTime="2026-02-23 13:27:58.938294028 +0000 UTC m=+1233.619997706" Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.973143 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:27:58 crc kubenswrapper[4851]: I0223 13:27:58.983259 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 
13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.001226 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:27:59 crc kubenswrapper[4851]: E0223 13:27:59.001634 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="ceilometer-notification-agent" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.001652 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="ceilometer-notification-agent" Feb 23 13:27:59 crc kubenswrapper[4851]: E0223 13:27:59.001682 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="proxy-httpd" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.001689 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="proxy-httpd" Feb 23 13:27:59 crc kubenswrapper[4851]: E0223 13:27:59.001702 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="sg-core" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.001708 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="sg-core" Feb 23 13:27:59 crc kubenswrapper[4851]: E0223 13:27:59.001725 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="ceilometer-central-agent" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.001731 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="ceilometer-central-agent" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.001886 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="proxy-httpd" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 
13:27:59.001897 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="ceilometer-central-agent" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.001906 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="ceilometer-notification-agent" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.001924 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" containerName="sg-core" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.003436 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.005931 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.006115 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.015503 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.048126 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-config-data\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.048186 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/1bf281ab-4f4c-447f-8b99-eee662974a92-kube-api-access-xx5tc\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc 
kubenswrapper[4851]: I0223 13:27:59.048252 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.048290 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-log-httpd\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.048411 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-scripts\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.048446 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-run-httpd\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.048479 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.150360 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/1bf281ab-4f4c-447f-8b99-eee662974a92-kube-api-access-xx5tc\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.150431 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.150481 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-log-httpd\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.150556 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-scripts\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.150587 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-run-httpd\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.150615 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 
crc kubenswrapper[4851]: I0223 13:27:59.150637 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-config-data\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.151456 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-log-httpd\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.151657 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-run-httpd\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.181493 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.182385 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-config-data\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.190654 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-scripts\") pod \"ceilometer-0\" (UID: 
\"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.195067 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/1bf281ab-4f4c-447f-8b99-eee662974a92-kube-api-access-xx5tc\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.196076 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.333619 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:27:59 crc kubenswrapper[4851]: I0223 13:27:59.994000 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97bb1048-c2f0-4b91-b07e-5a2fa6d40947" path="/var/lib/kubelet/pods/97bb1048-c2f0-4b91-b07e-5a2fa6d40947/volumes" Feb 23 13:28:02 crc kubenswrapper[4851]: I0223 13:28:02.141761 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69f9fbd4d-lldd8" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.259804 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-fsh9d"] Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.261556 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-fsh9d"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.293232 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fsh9d"]
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.325990 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86804859-3d9d-4fd8-9b36-f74c751d795f-operator-scripts\") pod \"nova-api-db-create-fsh9d\" (UID: \"86804859-3d9d-4fd8-9b36-f74c751d795f\") " pod="openstack/nova-api-db-create-fsh9d"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.326109 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfpkq\" (UniqueName: \"kubernetes.io/projected/86804859-3d9d-4fd8-9b36-f74c751d795f-kube-api-access-qfpkq\") pod \"nova-api-db-create-fsh9d\" (UID: \"86804859-3d9d-4fd8-9b36-f74c751d795f\") " pod="openstack/nova-api-db-create-fsh9d"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.360844 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-sntrg"]
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.361908 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-sntrg"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.390187 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7a31-account-create-update-j7zdq"]
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.391318 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7a31-account-create-update-j7zdq"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.393276 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.410393 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-sntrg"]
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.420277 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7a31-account-create-update-j7zdq"]
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.427997 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86804859-3d9d-4fd8-9b36-f74c751d795f-operator-scripts\") pod \"nova-api-db-create-fsh9d\" (UID: \"86804859-3d9d-4fd8-9b36-f74c751d795f\") " pod="openstack/nova-api-db-create-fsh9d"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.428094 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj6qk\" (UniqueName: \"kubernetes.io/projected/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-kube-api-access-tj6qk\") pod \"nova-api-7a31-account-create-update-j7zdq\" (UID: \"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d\") " pod="openstack/nova-api-7a31-account-create-update-j7zdq"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.428151 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfpkq\" (UniqueName: \"kubernetes.io/projected/86804859-3d9d-4fd8-9b36-f74c751d795f-kube-api-access-qfpkq\") pod \"nova-api-db-create-fsh9d\" (UID: \"86804859-3d9d-4fd8-9b36-f74c751d795f\") " pod="openstack/nova-api-db-create-fsh9d"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.428208 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd8xh\" (UniqueName: \"kubernetes.io/projected/e9dbd21e-9573-4caa-83d3-ed709dc66748-kube-api-access-qd8xh\") pod \"nova-cell0-db-create-sntrg\" (UID: \"e9dbd21e-9573-4caa-83d3-ed709dc66748\") " pod="openstack/nova-cell0-db-create-sntrg"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.428240 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dbd21e-9573-4caa-83d3-ed709dc66748-operator-scripts\") pod \"nova-cell0-db-create-sntrg\" (UID: \"e9dbd21e-9573-4caa-83d3-ed709dc66748\") " pod="openstack/nova-cell0-db-create-sntrg"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.428267 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-operator-scripts\") pod \"nova-api-7a31-account-create-update-j7zdq\" (UID: \"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d\") " pod="openstack/nova-api-7a31-account-create-update-j7zdq"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.430664 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86804859-3d9d-4fd8-9b36-f74c751d795f-operator-scripts\") pod \"nova-api-db-create-fsh9d\" (UID: \"86804859-3d9d-4fd8-9b36-f74c751d795f\") " pod="openstack/nova-api-db-create-fsh9d"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.473587 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-z9xt4"]
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.475073 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z9xt4"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.479062 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfpkq\" (UniqueName: \"kubernetes.io/projected/86804859-3d9d-4fd8-9b36-f74c751d795f-kube-api-access-qfpkq\") pod \"nova-api-db-create-fsh9d\" (UID: \"86804859-3d9d-4fd8-9b36-f74c751d795f\") " pod="openstack/nova-api-db-create-fsh9d"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.488827 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z9xt4"]
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.532441 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm4w6\" (UniqueName: \"kubernetes.io/projected/000f7332-5032-4668-8151-d5235db27f97-kube-api-access-nm4w6\") pod \"nova-cell1-db-create-z9xt4\" (UID: \"000f7332-5032-4668-8151-d5235db27f97\") " pod="openstack/nova-cell1-db-create-z9xt4"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.532528 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj6qk\" (UniqueName: \"kubernetes.io/projected/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-kube-api-access-tj6qk\") pod \"nova-api-7a31-account-create-update-j7zdq\" (UID: \"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d\") " pod="openstack/nova-api-7a31-account-create-update-j7zdq"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.532715 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd8xh\" (UniqueName: \"kubernetes.io/projected/e9dbd21e-9573-4caa-83d3-ed709dc66748-kube-api-access-qd8xh\") pod \"nova-cell0-db-create-sntrg\" (UID: \"e9dbd21e-9573-4caa-83d3-ed709dc66748\") " pod="openstack/nova-cell0-db-create-sntrg"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.532759 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dbd21e-9573-4caa-83d3-ed709dc66748-operator-scripts\") pod \"nova-cell0-db-create-sntrg\" (UID: \"e9dbd21e-9573-4caa-83d3-ed709dc66748\") " pod="openstack/nova-cell0-db-create-sntrg"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.532809 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-operator-scripts\") pod \"nova-api-7a31-account-create-update-j7zdq\" (UID: \"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d\") " pod="openstack/nova-api-7a31-account-create-update-j7zdq"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.532908 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/000f7332-5032-4668-8151-d5235db27f97-operator-scripts\") pod \"nova-cell1-db-create-z9xt4\" (UID: \"000f7332-5032-4668-8151-d5235db27f97\") " pod="openstack/nova-cell1-db-create-z9xt4"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.533921 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dbd21e-9573-4caa-83d3-ed709dc66748-operator-scripts\") pod \"nova-cell0-db-create-sntrg\" (UID: \"e9dbd21e-9573-4caa-83d3-ed709dc66748\") " pod="openstack/nova-cell0-db-create-sntrg"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.533941 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-operator-scripts\") pod \"nova-api-7a31-account-create-update-j7zdq\" (UID: \"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d\") " pod="openstack/nova-api-7a31-account-create-update-j7zdq"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.538351 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.573318 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj6qk\" (UniqueName: \"kubernetes.io/projected/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-kube-api-access-tj6qk\") pod \"nova-api-7a31-account-create-update-j7zdq\" (UID: \"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d\") " pod="openstack/nova-api-7a31-account-create-update-j7zdq"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.575323 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd8xh\" (UniqueName: \"kubernetes.io/projected/e9dbd21e-9573-4caa-83d3-ed709dc66748-kube-api-access-qd8xh\") pod \"nova-cell0-db-create-sntrg\" (UID: \"e9dbd21e-9573-4caa-83d3-ed709dc66748\") " pod="openstack/nova-cell0-db-create-sntrg"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.582121 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-393d-account-create-update-dqnvt"]
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.583275 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-393d-account-create-update-dqnvt"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.585219 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.587674 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fsh9d"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.596434 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-393d-account-create-update-dqnvt"]
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.634073 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-922ck\" (UniqueName: \"kubernetes.io/projected/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-kube-api-access-922ck\") pod \"nova-cell0-393d-account-create-update-dqnvt\" (UID: \"cbbe43f7-4c90-40e2-8127-b8e13a3b7656\") " pod="openstack/nova-cell0-393d-account-create-update-dqnvt"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.634125 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-operator-scripts\") pod \"nova-cell0-393d-account-create-update-dqnvt\" (UID: \"cbbe43f7-4c90-40e2-8127-b8e13a3b7656\") " pod="openstack/nova-cell0-393d-account-create-update-dqnvt"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.634156 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/000f7332-5032-4668-8151-d5235db27f97-operator-scripts\") pod \"nova-cell1-db-create-z9xt4\" (UID: \"000f7332-5032-4668-8151-d5235db27f97\") " pod="openstack/nova-cell1-db-create-z9xt4"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.634232 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm4w6\" (UniqueName: \"kubernetes.io/projected/000f7332-5032-4668-8151-d5235db27f97-kube-api-access-nm4w6\") pod \"nova-cell1-db-create-z9xt4\" (UID: \"000f7332-5032-4668-8151-d5235db27f97\") " pod="openstack/nova-cell1-db-create-z9xt4"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.635780 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/000f7332-5032-4668-8151-d5235db27f97-operator-scripts\") pod \"nova-cell1-db-create-z9xt4\" (UID: \"000f7332-5032-4668-8151-d5235db27f97\") " pod="openstack/nova-cell1-db-create-z9xt4"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.677518 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm4w6\" (UniqueName: \"kubernetes.io/projected/000f7332-5032-4668-8151-d5235db27f97-kube-api-access-nm4w6\") pod \"nova-cell1-db-create-z9xt4\" (UID: \"000f7332-5032-4668-8151-d5235db27f97\") " pod="openstack/nova-cell1-db-create-z9xt4"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.686937 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-sntrg"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.720533 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7a31-account-create-update-j7zdq"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.735274 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-922ck\" (UniqueName: \"kubernetes.io/projected/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-kube-api-access-922ck\") pod \"nova-cell0-393d-account-create-update-dqnvt\" (UID: \"cbbe43f7-4c90-40e2-8127-b8e13a3b7656\") " pod="openstack/nova-cell0-393d-account-create-update-dqnvt"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.735337 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-operator-scripts\") pod \"nova-cell0-393d-account-create-update-dqnvt\" (UID: \"cbbe43f7-4c90-40e2-8127-b8e13a3b7656\") " pod="openstack/nova-cell0-393d-account-create-update-dqnvt"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.736091 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-operator-scripts\") pod \"nova-cell0-393d-account-create-update-dqnvt\" (UID: \"cbbe43f7-4c90-40e2-8127-b8e13a3b7656\") " pod="openstack/nova-cell0-393d-account-create-update-dqnvt"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.773344 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-922ck\" (UniqueName: \"kubernetes.io/projected/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-kube-api-access-922ck\") pod \"nova-cell0-393d-account-create-update-dqnvt\" (UID: \"cbbe43f7-4c90-40e2-8127-b8e13a3b7656\") " pod="openstack/nova-cell0-393d-account-create-update-dqnvt"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.779177 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9578-account-create-update-x8g7j"]
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.780369 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9578-account-create-update-x8g7j"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.787024 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.804507 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9578-account-create-update-x8g7j"]
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.813731 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z9xt4"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.836915 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94500516-6967-409c-8197-2c93f110b9e7-operator-scripts\") pod \"nova-cell1-9578-account-create-update-x8g7j\" (UID: \"94500516-6967-409c-8197-2c93f110b9e7\") " pod="openstack/nova-cell1-9578-account-create-update-x8g7j"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.836969 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zktp\" (UniqueName: \"kubernetes.io/projected/94500516-6967-409c-8197-2c93f110b9e7-kube-api-access-7zktp\") pod \"nova-cell1-9578-account-create-update-x8g7j\" (UID: \"94500516-6967-409c-8197-2c93f110b9e7\") " pod="openstack/nova-cell1-9578-account-create-update-x8g7j"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.939037 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94500516-6967-409c-8197-2c93f110b9e7-operator-scripts\") pod \"nova-cell1-9578-account-create-update-x8g7j\" (UID: \"94500516-6967-409c-8197-2c93f110b9e7\") " pod="openstack/nova-cell1-9578-account-create-update-x8g7j"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.939094 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zktp\" (UniqueName: \"kubernetes.io/projected/94500516-6967-409c-8197-2c93f110b9e7-kube-api-access-7zktp\") pod \"nova-cell1-9578-account-create-update-x8g7j\" (UID: \"94500516-6967-409c-8197-2c93f110b9e7\") " pod="openstack/nova-cell1-9578-account-create-update-x8g7j"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.939634 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94500516-6967-409c-8197-2c93f110b9e7-operator-scripts\") pod \"nova-cell1-9578-account-create-update-x8g7j\" (UID: \"94500516-6967-409c-8197-2c93f110b9e7\") " pod="openstack/nova-cell1-9578-account-create-update-x8g7j"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.956978 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-393d-account-create-update-dqnvt"
Feb 23 13:28:03 crc kubenswrapper[4851]: I0223 13:28:03.957621 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zktp\" (UniqueName: \"kubernetes.io/projected/94500516-6967-409c-8197-2c93f110b9e7-kube-api-access-7zktp\") pod \"nova-cell1-9578-account-create-update-x8g7j\" (UID: \"94500516-6967-409c-8197-2c93f110b9e7\") " pod="openstack/nova-cell1-9578-account-create-update-x8g7j"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.110637 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9578-account-create-update-x8g7j"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.499793 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.500035 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="55c5d815-2740-4a04-aba6-b030687b69bb" containerName="glance-log" containerID="cri-o://2dd1343c1ba57065aa742f3d9e8b7ef9d19ed64ecbcf7380aec50b0007b0ec73" gracePeriod=30
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.500105 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="55c5d815-2740-4a04-aba6-b030687b69bb" containerName="glance-httpd" containerID="cri-o://71705e6de5619295269d61be1381c7af0f139a8bd398b834405b849c007f3961" gracePeriod=30
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.533499 4851 scope.go:117] "RemoveContainer" containerID="b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.673419 4851 scope.go:117] "RemoveContainer" containerID="08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.780404 4851 scope.go:117] "RemoveContainer" containerID="00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.846045 4851 scope.go:117] "RemoveContainer" containerID="ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4"
Feb 23 13:28:04 crc kubenswrapper[4851]: E0223 13:28:04.847745 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4\": container with ID starting with ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4 not found: ID does not exist" containerID="ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.847807 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4"} err="failed to get container status \"ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4\": rpc error: code = NotFound desc = could not find container \"ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4\": container with ID starting with ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4 not found: ID does not exist"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.847841 4851 scope.go:117] "RemoveContainer" containerID="b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93"
Feb 23 13:28:04 crc kubenswrapper[4851]: E0223 13:28:04.848450 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93\": container with ID starting with b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93 not found: ID does not exist" containerID="b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.848471 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93"} err="failed to get container status \"b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93\": rpc error: code = NotFound desc = could not find container \"b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93\": container with ID starting with b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93 not found: ID does not exist"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.848488 4851 scope.go:117] "RemoveContainer" containerID="08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2"
Feb 23 13:28:04 crc kubenswrapper[4851]: E0223 13:28:04.848867 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2\": container with ID starting with 08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2 not found: ID does not exist" containerID="08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.848904 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2"} err="failed to get container status \"08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2\": rpc error: code = NotFound desc = could not find container \"08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2\": container with ID starting with 08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2 not found: ID does not exist"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.848960 4851 scope.go:117] "RemoveContainer" containerID="00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181"
Feb 23 13:28:04 crc kubenswrapper[4851]: E0223 13:28:04.849230 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181\": container with ID starting with 00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181 not found: ID does not exist" containerID="00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.849255 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181"} err="failed to get container status \"00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181\": rpc error: code = NotFound desc = could not find container \"00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181\": container with ID starting with 00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181 not found: ID does not exist"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.849270 4851 scope.go:117] "RemoveContainer" containerID="ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.849560 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4"} err="failed to get container status \"ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4\": rpc error: code = NotFound desc = could not find container \"ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4\": container with ID starting with ada9de3cdfe4cbfb8058253fe0e6d30b5352e9e8d6bf969e14b46f6b06ec1fa4 not found: ID does not exist"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.849580 4851 scope.go:117] "RemoveContainer" containerID="b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.849960 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93"} err="failed to get container status \"b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93\": rpc error: code = NotFound desc = could not find container \"b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93\": container with ID starting with b813e11ab1a53ef45edb3ff07b15348229fce7a1d73eae293934185e52c55f93 not found: ID does not exist"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.849977 4851 scope.go:117] "RemoveContainer" containerID="08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.850254 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2"} err="failed to get container status \"08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2\": rpc error: code = NotFound desc = could not find container \"08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2\": container with ID starting with 08c6ad3f62dbec2913f841ac59cbd68cbaf23ac586b87e1cb172c539b208bfb2 not found: ID does not exist"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.850271 4851 scope.go:117] "RemoveContainer" containerID="00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181"
Feb 23 13:28:04 crc kubenswrapper[4851]: I0223 13:28:04.853046 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181"} err="failed to get container status \"00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181\": rpc error: code = NotFound desc = could not find container \"00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181\": container with ID starting with 00257a661b9adf7fa4623680fc4b3ecbab84f2fdcf4377f877c3cee512469181 not found: ID does not exist"
Feb 23 13:28:05 crc kubenswrapper[4851]: I0223 13:28:05.017561 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"86b670f3-6886-4a48-b0ec-a109e93c87a0","Type":"ContainerStarted","Data":"67e7fd29b6708cf005f218a85d8f96ec89fe21e6c2a6d0f5ec6559b93ef6a7e3"}
Feb 23 13:28:05 crc kubenswrapper[4851]: I0223 13:28:05.021647 4851 generic.go:334] "Generic (PLEG): container finished" podID="55c5d815-2740-4a04-aba6-b030687b69bb" containerID="2dd1343c1ba57065aa742f3d9e8b7ef9d19ed64ecbcf7380aec50b0007b0ec73" exitCode=143
Feb 23 13:28:05 crc kubenswrapper[4851]: I0223 13:28:05.021676 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55c5d815-2740-4a04-aba6-b030687b69bb","Type":"ContainerDied","Data":"2dd1343c1ba57065aa742f3d9e8b7ef9d19ed64ecbcf7380aec50b0007b0ec73"}
Feb 23 13:28:05 crc kubenswrapper[4851]: I0223 13:28:05.042841 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.5624252840000001 podStartE2EDuration="13.042825793s" podCreationTimestamp="2026-02-23 13:27:52 +0000 UTC" firstStartedPulling="2026-02-23 13:27:53.19235519 +0000 UTC m=+1227.874058868" lastFinishedPulling="2026-02-23 13:28:04.672755699 +0000 UTC m=+1239.354459377" observedRunningTime="2026-02-23 13:28:05.038250475 +0000 UTC m=+1239.719954153" watchObservedRunningTime="2026-02-23 13:28:05.042825793 +0000 UTC m=+1239.724529471"
Feb 23 13:28:05 crc kubenswrapper[4851]: I0223 13:28:05.166283 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-393d-account-create-update-dqnvt"]
Feb 23 13:28:05 crc kubenswrapper[4851]: I0223 13:28:05.306096 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-fsh9d"]
Feb 23 13:28:05 crc kubenswrapper[4851]: I0223 13:28:05.315547 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 23 13:28:05 crc kubenswrapper[4851]: I0223 13:28:05.380452 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-sntrg"]
Feb 23 13:28:05 crc kubenswrapper[4851]: W0223 13:28:05.382402 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9dbd21e_9573_4caa_83d3_ed709dc66748.slice/crio-c8e21e44373d2e9e99acfcbbe931027e69b1025bde428abf655ee07c24f7be13 WatchSource:0}: Error finding container c8e21e44373d2e9e99acfcbbe931027e69b1025bde428abf655ee07c24f7be13: Status 404 returned error can't find the container with id c8e21e44373d2e9e99acfcbbe931027e69b1025bde428abf655ee07c24f7be13
Feb 23 13:28:05 crc kubenswrapper[4851]: I0223 13:28:05.419569 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9578-account-create-update-x8g7j"]
Feb 23 13:28:05 crc kubenswrapper[4851]: W0223 13:28:05.421695 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94500516_6967_409c_8197_2c93f110b9e7.slice/crio-b77c4d52e8ffcace5481588ffee9178d9b14af6be38d87a73e86dfec98329692 WatchSource:0}: Error finding container b77c4d52e8ffcace5481588ffee9178d9b14af6be38d87a73e86dfec98329692: Status 404 returned error can't find the container with id b77c4d52e8ffcace5481588ffee9178d9b14af6be38d87a73e86dfec98329692
Feb 23 13:28:05 crc kubenswrapper[4851]: I0223 13:28:05.470620 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7a31-account-create-update-j7zdq"]
Feb 23 13:28:05 crc kubenswrapper[4851]: I0223 13:28:05.497985 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z9xt4"]
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.031154 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7a31-account-create-update-j7zdq" event={"ID":"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d","Type":"ContainerStarted","Data":"93bedb3285bebb32ce651d3cbb9c27d569622a9d414168d437e3a43b54c8253a"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.032931 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7a31-account-create-update-j7zdq" event={"ID":"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d","Type":"ContainerStarted","Data":"afbdf040279e8df2aa8d87142f9c0356846f08a5314c6605955ea30c3bff3e53"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.033580 4851 generic.go:334] "Generic (PLEG): container finished" podID="cbbe43f7-4c90-40e2-8127-b8e13a3b7656" containerID="b72bd1ccb44177c2da0a0e88fbd4bae054a1a192761e5ee13c464f8cfd2a8ee5" exitCode=0
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.033670 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-393d-account-create-update-dqnvt" event={"ID":"cbbe43f7-4c90-40e2-8127-b8e13a3b7656","Type":"ContainerDied","Data":"b72bd1ccb44177c2da0a0e88fbd4bae054a1a192761e5ee13c464f8cfd2a8ee5"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.033709 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-393d-account-create-update-dqnvt" event={"ID":"cbbe43f7-4c90-40e2-8127-b8e13a3b7656","Type":"ContainerStarted","Data":"af4d327237837b66611262c9806a6288605cb61b245ed82fbf78ea8cd904ede4"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.035294 4851 generic.go:334] "Generic (PLEG): container finished" podID="86804859-3d9d-4fd8-9b36-f74c751d795f" containerID="c3552dad3047de770dc11a9668914cf9db582265ccd8d0dcb50050246e81d454" exitCode=0
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.035443 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fsh9d" event={"ID":"86804859-3d9d-4fd8-9b36-f74c751d795f","Type":"ContainerDied","Data":"c3552dad3047de770dc11a9668914cf9db582265ccd8d0dcb50050246e81d454"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.035467 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fsh9d" event={"ID":"86804859-3d9d-4fd8-9b36-f74c751d795f","Type":"ContainerStarted","Data":"ef464d9eabab320cc81975e4dfb47f4d5a11514b3cec6a9c83786601e647c5e3"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.036621 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf281ab-4f4c-447f-8b99-eee662974a92","Type":"ContainerStarted","Data":"83b0dc1b34dfae4d984d2d8215af2e784795c3e4e38201c25848b099a94456c9"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.037981 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sntrg" event={"ID":"e9dbd21e-9573-4caa-83d3-ed709dc66748","Type":"ContainerStarted","Data":"0430524c47c7b3c26a76a228d3e69f0a775dce6089fd1c1eec597a8b0101390f"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.038020 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sntrg" event={"ID":"e9dbd21e-9573-4caa-83d3-ed709dc66748","Type":"ContainerStarted","Data":"c8e21e44373d2e9e99acfcbbe931027e69b1025bde428abf655ee07c24f7be13"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.040250 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9578-account-create-update-x8g7j" event={"ID":"94500516-6967-409c-8197-2c93f110b9e7","Type":"ContainerStarted","Data":"1be81a45d8222334125bc708b571d676e774b72b73b12dba378b576eb022a89f"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.040399 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9578-account-create-update-x8g7j" event={"ID":"94500516-6967-409c-8197-2c93f110b9e7","Type":"ContainerStarted","Data":"b77c4d52e8ffcace5481588ffee9178d9b14af6be38d87a73e86dfec98329692"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.041758 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z9xt4" event={"ID":"000f7332-5032-4668-8151-d5235db27f97","Type":"ContainerStarted","Data":"4a04b9fce1974cfe25418462f7cefe5e703ac87a53dde466b3df87a3bdbb1cfe"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.041819 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z9xt4" event={"ID":"000f7332-5032-4668-8151-d5235db27f97","Type":"ContainerStarted","Data":"6126cc5cc3ec8c024b0ac515887cf3e3115d8184d4edeae35a481fbda9ba75ee"}
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.133359 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-7a31-account-create-update-j7zdq" podStartSLOduration=3.133311143 podStartE2EDuration="3.133311143s" podCreationTimestamp="2026-02-23 13:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:06.131585215 +0000 UTC m=+1240.813288913" watchObservedRunningTime="2026-02-23 13:28:06.133311143 +0000 UTC m=+1240.815014821"
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.178622 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-sntrg" podStartSLOduration=3.178598279 podStartE2EDuration="3.178598279s" podCreationTimestamp="2026-02-23 13:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:06.151862532 +0000 UTC m=+1240.833566220" watchObservedRunningTime="2026-02-23 13:28:06.178598279 +0000 UTC m=+1240.860301967"
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.216585 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-z9xt4" podStartSLOduration=3.21656932 podStartE2EDuration="3.21656932s" podCreationTimestamp="2026-02-23 13:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:06.167399526 +0000 UTC m=+1240.849103204" watchObservedRunningTime="2026-02-23 13:28:06.21656932 +0000 UTC m=+1240.898272988"
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.231029 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-9578-account-create-update-x8g7j" podStartSLOduration=3.2310103039999998 podStartE2EDuration="3.231010304s" podCreationTimestamp="2026-02-23 13:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:06.183608129 +0000 UTC m=+1240.865311807" watchObservedRunningTime="2026-02-23 13:28:06.231010304 +0000 UTC m=+1240.912713982"
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.849032 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.881807 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5ccf5dc859-8drcp"
Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.882852 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5ccf5dc859-8drcp"
Feb 23 13:28:06 crc kubenswrapper[4851]:
13:28:06.967176 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.970322 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="154ab902-181d-479d-b449-acc94531a235" containerName="glance-log" containerID="cri-o://bcac84db92448afeab719f76638f5c39c4fab233d29f104717d59e00f4f4d196" gracePeriod=30 Feb 23 13:28:06 crc kubenswrapper[4851]: I0223 13:28:06.970716 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="154ab902-181d-479d-b449-acc94531a235" containerName="glance-httpd" containerID="cri-o://021f6b22de043f9421aa3387e408457b15cb1a439560806440749678a33ad98a" gracePeriod=30 Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.053446 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf281ab-4f4c-447f-8b99-eee662974a92","Type":"ContainerStarted","Data":"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73"} Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.053754 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf281ab-4f4c-447f-8b99-eee662974a92","Type":"ContainerStarted","Data":"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34"} Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.055348 4851 generic.go:334] "Generic (PLEG): container finished" podID="e9dbd21e-9573-4caa-83d3-ed709dc66748" containerID="0430524c47c7b3c26a76a228d3e69f0a775dce6089fd1c1eec597a8b0101390f" exitCode=0 Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.055410 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sntrg" 
event={"ID":"e9dbd21e-9573-4caa-83d3-ed709dc66748","Type":"ContainerDied","Data":"0430524c47c7b3c26a76a228d3e69f0a775dce6089fd1c1eec597a8b0101390f"} Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.061554 4851 generic.go:334] "Generic (PLEG): container finished" podID="94500516-6967-409c-8197-2c93f110b9e7" containerID="1be81a45d8222334125bc708b571d676e774b72b73b12dba378b576eb022a89f" exitCode=0 Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.061723 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9578-account-create-update-x8g7j" event={"ID":"94500516-6967-409c-8197-2c93f110b9e7","Type":"ContainerDied","Data":"1be81a45d8222334125bc708b571d676e774b72b73b12dba378b576eb022a89f"} Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.115130 4851 generic.go:334] "Generic (PLEG): container finished" podID="000f7332-5032-4668-8151-d5235db27f97" containerID="4a04b9fce1974cfe25418462f7cefe5e703ac87a53dde466b3df87a3bdbb1cfe" exitCode=0 Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.115211 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z9xt4" event={"ID":"000f7332-5032-4668-8151-d5235db27f97","Type":"ContainerDied","Data":"4a04b9fce1974cfe25418462f7cefe5e703ac87a53dde466b3df87a3bdbb1cfe"} Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.128537 4851 generic.go:334] "Generic (PLEG): container finished" podID="6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d" containerID="93bedb3285bebb32ce651d3cbb9c27d569622a9d414168d437e3a43b54c8253a" exitCode=0 Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.128732 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7a31-account-create-update-j7zdq" event={"ID":"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d","Type":"ContainerDied","Data":"93bedb3285bebb32ce651d3cbb9c27d569622a9d414168d437e3a43b54c8253a"} Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.761653 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-393d-account-create-update-dqnvt" Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.765636 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fsh9d" Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.819103 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-operator-scripts\") pod \"cbbe43f7-4c90-40e2-8127-b8e13a3b7656\" (UID: \"cbbe43f7-4c90-40e2-8127-b8e13a3b7656\") " Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.819207 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfpkq\" (UniqueName: \"kubernetes.io/projected/86804859-3d9d-4fd8-9b36-f74c751d795f-kube-api-access-qfpkq\") pod \"86804859-3d9d-4fd8-9b36-f74c751d795f\" (UID: \"86804859-3d9d-4fd8-9b36-f74c751d795f\") " Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.819307 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86804859-3d9d-4fd8-9b36-f74c751d795f-operator-scripts\") pod \"86804859-3d9d-4fd8-9b36-f74c751d795f\" (UID: \"86804859-3d9d-4fd8-9b36-f74c751d795f\") " Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.819380 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-922ck\" (UniqueName: \"kubernetes.io/projected/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-kube-api-access-922ck\") pod \"cbbe43f7-4c90-40e2-8127-b8e13a3b7656\" (UID: \"cbbe43f7-4c90-40e2-8127-b8e13a3b7656\") " Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.820860 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"cbbe43f7-4c90-40e2-8127-b8e13a3b7656" (UID: "cbbe43f7-4c90-40e2-8127-b8e13a3b7656"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.821236 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86804859-3d9d-4fd8-9b36-f74c751d795f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86804859-3d9d-4fd8-9b36-f74c751d795f" (UID: "86804859-3d9d-4fd8-9b36-f74c751d795f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.830578 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86804859-3d9d-4fd8-9b36-f74c751d795f-kube-api-access-qfpkq" (OuterVolumeSpecName: "kube-api-access-qfpkq") pod "86804859-3d9d-4fd8-9b36-f74c751d795f" (UID: "86804859-3d9d-4fd8-9b36-f74c751d795f"). InnerVolumeSpecName "kube-api-access-qfpkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.834493 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-kube-api-access-922ck" (OuterVolumeSpecName: "kube-api-access-922ck") pod "cbbe43f7-4c90-40e2-8127-b8e13a3b7656" (UID: "cbbe43f7-4c90-40e2-8127-b8e13a3b7656"). InnerVolumeSpecName "kube-api-access-922ck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.921515 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-922ck\" (UniqueName: \"kubernetes.io/projected/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-kube-api-access-922ck\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.921544 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbbe43f7-4c90-40e2-8127-b8e13a3b7656-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.921555 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfpkq\" (UniqueName: \"kubernetes.io/projected/86804859-3d9d-4fd8-9b36-f74c751d795f-kube-api-access-qfpkq\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:07 crc kubenswrapper[4851]: I0223 13:28:07.921563 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86804859-3d9d-4fd8-9b36-f74c751d795f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.137111 4851 generic.go:334] "Generic (PLEG): container finished" podID="55c5d815-2740-4a04-aba6-b030687b69bb" containerID="71705e6de5619295269d61be1381c7af0f139a8bd398b834405b849c007f3961" exitCode=0 Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.137172 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"55c5d815-2740-4a04-aba6-b030687b69bb","Type":"ContainerDied","Data":"71705e6de5619295269d61be1381c7af0f139a8bd398b834405b849c007f3961"} Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.137537 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"55c5d815-2740-4a04-aba6-b030687b69bb","Type":"ContainerDied","Data":"da7c189540def8c362aff026d32261da7d2b3ac45d6d4a284ef7a0073250b678"} Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.137556 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da7c189540def8c362aff026d32261da7d2b3ac45d6d4a284ef7a0073250b678" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.137411 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.139491 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf281ab-4f4c-447f-8b99-eee662974a92","Type":"ContainerStarted","Data":"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137"} Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.141774 4851 generic.go:334] "Generic (PLEG): container finished" podID="154ab902-181d-479d-b449-acc94531a235" containerID="bcac84db92448afeab719f76638f5c39c4fab233d29f104717d59e00f4f4d196" exitCode=143 Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.141838 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"154ab902-181d-479d-b449-acc94531a235","Type":"ContainerDied","Data":"bcac84db92448afeab719f76638f5c39c4fab233d29f104717d59e00f4f4d196"} Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.143409 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-393d-account-create-update-dqnvt" event={"ID":"cbbe43f7-4c90-40e2-8127-b8e13a3b7656","Type":"ContainerDied","Data":"af4d327237837b66611262c9806a6288605cb61b245ed82fbf78ea8cd904ede4"} Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.143440 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af4d327237837b66611262c9806a6288605cb61b245ed82fbf78ea8cd904ede4" Feb 23 13:28:08 crc 
kubenswrapper[4851]: I0223 13:28:08.143480 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-393d-account-create-update-dqnvt" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.147066 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-fsh9d" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.147074 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-fsh9d" event={"ID":"86804859-3d9d-4fd8-9b36-f74c751d795f","Type":"ContainerDied","Data":"ef464d9eabab320cc81975e4dfb47f4d5a11514b3cec6a9c83786601e647c5e3"} Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.147365 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef464d9eabab320cc81975e4dfb47f4d5a11514b3cec6a9c83786601e647c5e3" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.227721 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-scripts\") pod \"55c5d815-2740-4a04-aba6-b030687b69bb\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.227766 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-public-tls-certs\") pod \"55c5d815-2740-4a04-aba6-b030687b69bb\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.227880 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-combined-ca-bundle\") pod \"55c5d815-2740-4a04-aba6-b030687b69bb\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 
13:28:08.227906 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-logs\") pod \"55c5d815-2740-4a04-aba6-b030687b69bb\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.227959 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-httpd-run\") pod \"55c5d815-2740-4a04-aba6-b030687b69bb\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.228049 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"55c5d815-2740-4a04-aba6-b030687b69bb\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.228086 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-config-data\") pod \"55c5d815-2740-4a04-aba6-b030687b69bb\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.228150 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj9b2\" (UniqueName: \"kubernetes.io/projected/55c5d815-2740-4a04-aba6-b030687b69bb-kube-api-access-bj9b2\") pod \"55c5d815-2740-4a04-aba6-b030687b69bb\" (UID: \"55c5d815-2740-4a04-aba6-b030687b69bb\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.228644 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "55c5d815-2740-4a04-aba6-b030687b69bb" (UID: "55c5d815-2740-4a04-aba6-b030687b69bb"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.228772 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-logs" (OuterVolumeSpecName: "logs") pod "55c5d815-2740-4a04-aba6-b030687b69bb" (UID: "55c5d815-2740-4a04-aba6-b030687b69bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.232570 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-scripts" (OuterVolumeSpecName: "scripts") pod "55c5d815-2740-4a04-aba6-b030687b69bb" (UID: "55c5d815-2740-4a04-aba6-b030687b69bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.234606 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "55c5d815-2740-4a04-aba6-b030687b69bb" (UID: "55c5d815-2740-4a04-aba6-b030687b69bb"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.237142 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c5d815-2740-4a04-aba6-b030687b69bb-kube-api-access-bj9b2" (OuterVolumeSpecName: "kube-api-access-bj9b2") pod "55c5d815-2740-4a04-aba6-b030687b69bb" (UID: "55c5d815-2740-4a04-aba6-b030687b69bb"). InnerVolumeSpecName "kube-api-access-bj9b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.305162 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55c5d815-2740-4a04-aba6-b030687b69bb" (UID: "55c5d815-2740-4a04-aba6-b030687b69bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.331119 4851 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.331164 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj9b2\" (UniqueName: \"kubernetes.io/projected/55c5d815-2740-4a04-aba6-b030687b69bb-kube-api-access-bj9b2\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.331179 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.331192 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.331203 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.331213 4851 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/55c5d815-2740-4a04-aba6-b030687b69bb-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.337005 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-config-data" (OuterVolumeSpecName: "config-data") pod "55c5d815-2740-4a04-aba6-b030687b69bb" (UID: "55c5d815-2740-4a04-aba6-b030687b69bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.352784 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "55c5d815-2740-4a04-aba6-b030687b69bb" (UID: "55c5d815-2740-4a04-aba6-b030687b69bb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.364482 4851 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.433495 4851 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.433535 4851 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.433545 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55c5d815-2740-4a04-aba6-b030687b69bb-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc 
kubenswrapper[4851]: I0223 13:28:08.701115 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z9xt4" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.739891 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/000f7332-5032-4668-8151-d5235db27f97-operator-scripts\") pod \"000f7332-5032-4668-8151-d5235db27f97\" (UID: \"000f7332-5032-4668-8151-d5235db27f97\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.740087 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm4w6\" (UniqueName: \"kubernetes.io/projected/000f7332-5032-4668-8151-d5235db27f97-kube-api-access-nm4w6\") pod \"000f7332-5032-4668-8151-d5235db27f97\" (UID: \"000f7332-5032-4668-8151-d5235db27f97\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.740663 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000f7332-5032-4668-8151-d5235db27f97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "000f7332-5032-4668-8151-d5235db27f97" (UID: "000f7332-5032-4668-8151-d5235db27f97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.746551 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000f7332-5032-4668-8151-d5235db27f97-kube-api-access-nm4w6" (OuterVolumeSpecName: "kube-api-access-nm4w6") pod "000f7332-5032-4668-8151-d5235db27f97" (UID: "000f7332-5032-4668-8151-d5235db27f97"). InnerVolumeSpecName "kube-api-access-nm4w6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.842469 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/000f7332-5032-4668-8151-d5235db27f97-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.842503 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm4w6\" (UniqueName: \"kubernetes.io/projected/000f7332-5032-4668-8151-d5235db27f97-kube-api-access-nm4w6\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.884599 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-sntrg" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.893616 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9578-account-create-update-x8g7j" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.905285 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7a31-account-create-update-j7zdq" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.943224 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj6qk\" (UniqueName: \"kubernetes.io/projected/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-kube-api-access-tj6qk\") pod \"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d\" (UID: \"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.943427 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dbd21e-9573-4caa-83d3-ed709dc66748-operator-scripts\") pod \"e9dbd21e-9573-4caa-83d3-ed709dc66748\" (UID: \"e9dbd21e-9573-4caa-83d3-ed709dc66748\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.943465 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94500516-6967-409c-8197-2c93f110b9e7-operator-scripts\") pod \"94500516-6967-409c-8197-2c93f110b9e7\" (UID: \"94500516-6967-409c-8197-2c93f110b9e7\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.943509 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd8xh\" (UniqueName: \"kubernetes.io/projected/e9dbd21e-9573-4caa-83d3-ed709dc66748-kube-api-access-qd8xh\") pod \"e9dbd21e-9573-4caa-83d3-ed709dc66748\" (UID: \"e9dbd21e-9573-4caa-83d3-ed709dc66748\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.943551 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zktp\" (UniqueName: \"kubernetes.io/projected/94500516-6967-409c-8197-2c93f110b9e7-kube-api-access-7zktp\") pod \"94500516-6967-409c-8197-2c93f110b9e7\" (UID: \"94500516-6967-409c-8197-2c93f110b9e7\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.943593 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-operator-scripts\") pod \"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d\" (UID: \"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d\") " Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.943947 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94500516-6967-409c-8197-2c93f110b9e7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94500516-6967-409c-8197-2c93f110b9e7" (UID: "94500516-6967-409c-8197-2c93f110b9e7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.944234 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94500516-6967-409c-8197-2c93f110b9e7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.944363 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d" (UID: "6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.945001 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9dbd21e-9573-4caa-83d3-ed709dc66748-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9dbd21e-9573-4caa-83d3-ed709dc66748" (UID: "e9dbd21e-9573-4caa-83d3-ed709dc66748"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.947254 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9dbd21e-9573-4caa-83d3-ed709dc66748-kube-api-access-qd8xh" (OuterVolumeSpecName: "kube-api-access-qd8xh") pod "e9dbd21e-9573-4caa-83d3-ed709dc66748" (UID: "e9dbd21e-9573-4caa-83d3-ed709dc66748"). InnerVolumeSpecName "kube-api-access-qd8xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.947846 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94500516-6967-409c-8197-2c93f110b9e7-kube-api-access-7zktp" (OuterVolumeSpecName: "kube-api-access-7zktp") pod "94500516-6967-409c-8197-2c93f110b9e7" (UID: "94500516-6967-409c-8197-2c93f110b9e7"). InnerVolumeSpecName "kube-api-access-7zktp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:08 crc kubenswrapper[4851]: I0223 13:28:08.947891 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-kube-api-access-tj6qk" (OuterVolumeSpecName: "kube-api-access-tj6qk") pod "6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d" (UID: "6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d"). InnerVolumeSpecName "kube-api-access-tj6qk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.048197 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd8xh\" (UniqueName: \"kubernetes.io/projected/e9dbd21e-9573-4caa-83d3-ed709dc66748-kube-api-access-qd8xh\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.048239 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zktp\" (UniqueName: \"kubernetes.io/projected/94500516-6967-409c-8197-2c93f110b9e7-kube-api-access-7zktp\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.048254 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.048268 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj6qk\" (UniqueName: \"kubernetes.io/projected/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d-kube-api-access-tj6qk\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.048279 4851 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9dbd21e-9573-4caa-83d3-ed709dc66748-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.161263 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-sntrg" event={"ID":"e9dbd21e-9573-4caa-83d3-ed709dc66748","Type":"ContainerDied","Data":"c8e21e44373d2e9e99acfcbbe931027e69b1025bde428abf655ee07c24f7be13"} Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.161303 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8e21e44373d2e9e99acfcbbe931027e69b1025bde428abf655ee07c24f7be13" Feb 23 13:28:09 crc 
kubenswrapper[4851]: I0223 13:28:09.161388 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-sntrg" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.168095 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9578-account-create-update-x8g7j" event={"ID":"94500516-6967-409c-8197-2c93f110b9e7","Type":"ContainerDied","Data":"b77c4d52e8ffcace5481588ffee9178d9b14af6be38d87a73e86dfec98329692"} Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.168125 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9578-account-create-update-x8g7j" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.168140 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b77c4d52e8ffcace5481588ffee9178d9b14af6be38d87a73e86dfec98329692" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.170777 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z9xt4" event={"ID":"000f7332-5032-4668-8151-d5235db27f97","Type":"ContainerDied","Data":"6126cc5cc3ec8c024b0ac515887cf3e3115d8184d4edeae35a481fbda9ba75ee"} Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.170821 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6126cc5cc3ec8c024b0ac515887cf3e3115d8184d4edeae35a481fbda9ba75ee" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.170882 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z9xt4" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.179526 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.179526 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7a31-account-create-update-j7zdq" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.179526 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7a31-account-create-update-j7zdq" event={"ID":"6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d","Type":"ContainerDied","Data":"afbdf040279e8df2aa8d87142f9c0356846f08a5314c6605955ea30c3bff3e53"} Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.179575 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afbdf040279e8df2aa8d87142f9c0356846f08a5314c6605955ea30c3bff3e53" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.248422 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.277523 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299055 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:28:09 crc kubenswrapper[4851]: E0223 13:28:09.299481 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d" containerName="mariadb-account-create-update" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299499 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d" containerName="mariadb-account-create-update" Feb 23 13:28:09 crc kubenswrapper[4851]: E0223 13:28:09.299512 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c5d815-2740-4a04-aba6-b030687b69bb" containerName="glance-httpd" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299518 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c5d815-2740-4a04-aba6-b030687b69bb" containerName="glance-httpd" Feb 23 13:28:09 crc kubenswrapper[4851]: E0223 13:28:09.299533 
4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000f7332-5032-4668-8151-d5235db27f97" containerName="mariadb-database-create" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299541 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="000f7332-5032-4668-8151-d5235db27f97" containerName="mariadb-database-create" Feb 23 13:28:09 crc kubenswrapper[4851]: E0223 13:28:09.299555 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c5d815-2740-4a04-aba6-b030687b69bb" containerName="glance-log" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299561 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c5d815-2740-4a04-aba6-b030687b69bb" containerName="glance-log" Feb 23 13:28:09 crc kubenswrapper[4851]: E0223 13:28:09.299571 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9dbd21e-9573-4caa-83d3-ed709dc66748" containerName="mariadb-database-create" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299577 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9dbd21e-9573-4caa-83d3-ed709dc66748" containerName="mariadb-database-create" Feb 23 13:28:09 crc kubenswrapper[4851]: E0223 13:28:09.299586 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbbe43f7-4c90-40e2-8127-b8e13a3b7656" containerName="mariadb-account-create-update" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299592 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbbe43f7-4c90-40e2-8127-b8e13a3b7656" containerName="mariadb-account-create-update" Feb 23 13:28:09 crc kubenswrapper[4851]: E0223 13:28:09.299609 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86804859-3d9d-4fd8-9b36-f74c751d795f" containerName="mariadb-database-create" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299616 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="86804859-3d9d-4fd8-9b36-f74c751d795f" containerName="mariadb-database-create" Feb 23 
13:28:09 crc kubenswrapper[4851]: E0223 13:28:09.299628 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94500516-6967-409c-8197-2c93f110b9e7" containerName="mariadb-account-create-update" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299633 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="94500516-6967-409c-8197-2c93f110b9e7" containerName="mariadb-account-create-update" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299816 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbbe43f7-4c90-40e2-8127-b8e13a3b7656" containerName="mariadb-account-create-update" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299836 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="86804859-3d9d-4fd8-9b36-f74c751d795f" containerName="mariadb-database-create" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299857 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d" containerName="mariadb-account-create-update" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299869 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c5d815-2740-4a04-aba6-b030687b69bb" containerName="glance-httpd" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299880 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="94500516-6967-409c-8197-2c93f110b9e7" containerName="mariadb-account-create-update" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299892 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9dbd21e-9573-4caa-83d3-ed709dc66748" containerName="mariadb-database-create" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299907 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c5d815-2740-4a04-aba6-b030687b69bb" containerName="glance-log" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.299917 4851 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="000f7332-5032-4668-8151-d5235db27f97" containerName="mariadb-database-create" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.303115 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.305749 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.306944 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.307088 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.359146 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.359205 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-scripts\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.359296 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " 
pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.359364 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.359398 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/839d4518-f84b-4a2c-81eb-c0112da70e71-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.359432 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8jfs\" (UniqueName: \"kubernetes.io/projected/839d4518-f84b-4a2c-81eb-c0112da70e71-kube-api-access-j8jfs\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.359491 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-config-data\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.359558 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/839d4518-f84b-4a2c-81eb-c0112da70e71-logs\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") 
" pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.464253 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/839d4518-f84b-4a2c-81eb-c0112da70e71-logs\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.464649 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.464679 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-scripts\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.464712 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.464739 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 
13:28:09.464761 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/839d4518-f84b-4a2c-81eb-c0112da70e71-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.464784 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8jfs\" (UniqueName: \"kubernetes.io/projected/839d4518-f84b-4a2c-81eb-c0112da70e71-kube-api-access-j8jfs\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.464823 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-config-data\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.465127 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/839d4518-f84b-4a2c-81eb-c0112da70e71-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.465164 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.465452 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/839d4518-f84b-4a2c-81eb-c0112da70e71-logs\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.470820 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-scripts\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.471399 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.472189 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.483301 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/839d4518-f84b-4a2c-81eb-c0112da70e71-config-data\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.485224 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8jfs\" (UniqueName: 
\"kubernetes.io/projected/839d4518-f84b-4a2c-81eb-c0112da70e71-kube-api-access-j8jfs\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.494387 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"839d4518-f84b-4a2c-81eb-c0112da70e71\") " pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.641836 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 13:28:09 crc kubenswrapper[4851]: I0223 13:28:09.981517 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c5d815-2740-4a04-aba6-b030687b69bb" path="/var/lib/kubelet/pods/55c5d815-2740-4a04-aba6-b030687b69bb/volumes" Feb 23 13:28:10 crc kubenswrapper[4851]: I0223 13:28:10.185302 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 13:28:10 crc kubenswrapper[4851]: I0223 13:28:10.197234 4851 generic.go:334] "Generic (PLEG): container finished" podID="154ab902-181d-479d-b449-acc94531a235" containerID="021f6b22de043f9421aa3387e408457b15cb1a439560806440749678a33ad98a" exitCode=0 Feb 23 13:28:10 crc kubenswrapper[4851]: I0223 13:28:10.197303 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"154ab902-181d-479d-b449-acc94531a235","Type":"ContainerDied","Data":"021f6b22de043f9421aa3387e408457b15cb1a439560806440749678a33ad98a"} Feb 23 13:28:10 crc kubenswrapper[4851]: I0223 13:28:10.200354 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1bf281ab-4f4c-447f-8b99-eee662974a92","Type":"ContainerStarted","Data":"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed"} Feb 23 13:28:10 crc kubenswrapper[4851]: I0223 13:28:10.200529 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 13:28:10 crc kubenswrapper[4851]: I0223 13:28:10.200527 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="ceilometer-central-agent" containerID="cri-o://e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34" gracePeriod=30 Feb 23 13:28:10 crc kubenswrapper[4851]: I0223 13:28:10.200595 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="sg-core" containerID="cri-o://f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137" gracePeriod=30 Feb 23 13:28:10 crc kubenswrapper[4851]: I0223 13:28:10.200551 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="proxy-httpd" containerID="cri-o://0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed" gracePeriod=30 Feb 23 13:28:10 crc kubenswrapper[4851]: I0223 13:28:10.200575 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="ceilometer-notification-agent" containerID="cri-o://8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73" gracePeriod=30 Feb 23 13:28:10 crc kubenswrapper[4851]: W0223 13:28:10.237862 4851 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod839d4518_f84b_4a2c_81eb_c0112da70e71.slice/crio-ce401dacb21b47794058802457fa445e5de15879b0ab9993cee132f4016db1fc WatchSource:0}: Error finding container ce401dacb21b47794058802457fa445e5de15879b0ab9993cee132f4016db1fc: Status 404 returned error can't find the container with id ce401dacb21b47794058802457fa445e5de15879b0ab9993cee132f4016db1fc Feb 23 13:28:10 crc kubenswrapper[4851]: I0223 13:28:10.898702 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:10 crc kubenswrapper[4851]: I0223 13:28:10.925057 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=9.129321241 podStartE2EDuration="12.925037674s" podCreationTimestamp="2026-02-23 13:27:58 +0000 UTC" firstStartedPulling="2026-02-23 13:28:05.372825907 +0000 UTC m=+1240.054529585" lastFinishedPulling="2026-02-23 13:28:09.16854234 +0000 UTC m=+1243.850246018" observedRunningTime="2026-02-23 13:28:10.228915108 +0000 UTC m=+1244.910618816" watchObservedRunningTime="2026-02-23 13:28:10.925037674 +0000 UTC m=+1245.606741352" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.002864 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-combined-ca-bundle\") pod \"154ab902-181d-479d-b449-acc94531a235\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.002905 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjx8m\" (UniqueName: \"kubernetes.io/projected/154ab902-181d-479d-b449-acc94531a235-kube-api-access-vjx8m\") pod \"154ab902-181d-479d-b449-acc94531a235\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 
13:28:11.002982 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-config-data\") pod \"154ab902-181d-479d-b449-acc94531a235\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.003019 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-internal-tls-certs\") pod \"154ab902-181d-479d-b449-acc94531a235\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.003047 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-logs\") pod \"154ab902-181d-479d-b449-acc94531a235\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.003107 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-httpd-run\") pod \"154ab902-181d-479d-b449-acc94531a235\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.003134 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-scripts\") pod \"154ab902-181d-479d-b449-acc94531a235\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.003241 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"154ab902-181d-479d-b449-acc94531a235\" (UID: \"154ab902-181d-479d-b449-acc94531a235\") " Feb 23 
13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.005352 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-logs" (OuterVolumeSpecName: "logs") pod "154ab902-181d-479d-b449-acc94531a235" (UID: "154ab902-181d-479d-b449-acc94531a235"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.008991 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/154ab902-181d-479d-b449-acc94531a235-kube-api-access-vjx8m" (OuterVolumeSpecName: "kube-api-access-vjx8m") pod "154ab902-181d-479d-b449-acc94531a235" (UID: "154ab902-181d-479d-b449-acc94531a235"). InnerVolumeSpecName "kube-api-access-vjx8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.011638 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "154ab902-181d-479d-b449-acc94531a235" (UID: "154ab902-181d-479d-b449-acc94531a235"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.015697 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-scripts" (OuterVolumeSpecName: "scripts") pod "154ab902-181d-479d-b449-acc94531a235" (UID: "154ab902-181d-479d-b449-acc94531a235"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.021693 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "154ab902-181d-479d-b449-acc94531a235" (UID: "154ab902-181d-479d-b449-acc94531a235"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.040769 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "154ab902-181d-479d-b449-acc94531a235" (UID: "154ab902-181d-479d-b449-acc94531a235"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.082602 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-config-data" (OuterVolumeSpecName: "config-data") pod "154ab902-181d-479d-b449-acc94531a235" (UID: "154ab902-181d-479d-b449-acc94531a235"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.105151 4851 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.105181 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.105192 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjx8m\" (UniqueName: \"kubernetes.io/projected/154ab902-181d-479d-b449-acc94531a235-kube-api-access-vjx8m\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.105201 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.105210 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.105217 4851 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/154ab902-181d-479d-b449-acc94531a235-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.105226 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.107861 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.108829 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "154ab902-181d-479d-b449-acc94531a235" (UID: "154ab902-181d-479d-b449-acc94531a235"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.115687 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7bd58878f7-xhsz6" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.155223 4851 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.207493 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-config-data\") pod \"1bf281ab-4f4c-447f-8b99-eee662974a92\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.207581 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-scripts\") pod \"1bf281ab-4f4c-447f-8b99-eee662974a92\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.207620 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/1bf281ab-4f4c-447f-8b99-eee662974a92-kube-api-access-xx5tc\") pod \"1bf281ab-4f4c-447f-8b99-eee662974a92\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " Feb 23 13:28:11 crc 
kubenswrapper[4851]: I0223 13:28:11.207646 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-run-httpd\") pod \"1bf281ab-4f4c-447f-8b99-eee662974a92\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.207730 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-log-httpd\") pod \"1bf281ab-4f4c-447f-8b99-eee662974a92\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.207790 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-combined-ca-bundle\") pod \"1bf281ab-4f4c-447f-8b99-eee662974a92\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.207825 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-sg-core-conf-yaml\") pod \"1bf281ab-4f4c-447f-8b99-eee662974a92\" (UID: \"1bf281ab-4f4c-447f-8b99-eee662974a92\") " Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.208723 4851 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.208742 4851 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/154ab902-181d-479d-b449-acc94531a235-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.209288 4851 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1bf281ab-4f4c-447f-8b99-eee662974a92" (UID: "1bf281ab-4f4c-447f-8b99-eee662974a92"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.209589 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1bf281ab-4f4c-447f-8b99-eee662974a92" (UID: "1bf281ab-4f4c-447f-8b99-eee662974a92"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.218682 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f5f47d7dd-d76bg"] Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.219007 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f5f47d7dd-d76bg" podUID="2a5eaa08-375b-4738-9b4c-0440dffbd7bf" containerName="neutron-api" containerID="cri-o://9768cd599cd3d06879c56fe10af928af5abb04a302fa94a6f1919d0d955266fe" gracePeriod=30 Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.219168 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f5f47d7dd-d76bg" podUID="2a5eaa08-375b-4738-9b4c-0440dffbd7bf" containerName="neutron-httpd" containerID="cri-o://96ed0095741921188a8d41746ad066a62c194e8a164dea434629ddc034da5e22" gracePeriod=30 Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.252859 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-scripts" (OuterVolumeSpecName: "scripts") pod "1bf281ab-4f4c-447f-8b99-eee662974a92" (UID: "1bf281ab-4f4c-447f-8b99-eee662974a92"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.254620 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf281ab-4f4c-447f-8b99-eee662974a92-kube-api-access-xx5tc" (OuterVolumeSpecName: "kube-api-access-xx5tc") pod "1bf281ab-4f4c-447f-8b99-eee662974a92" (UID: "1bf281ab-4f4c-447f-8b99-eee662974a92"). InnerVolumeSpecName "kube-api-access-xx5tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.284634 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"154ab902-181d-479d-b449-acc94531a235","Type":"ContainerDied","Data":"0ad0539ca4a93dd44bceffea8b7fe9519cba5ff9a1737ea0cd849ce056e01bcd"} Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.284683 4851 scope.go:117] "RemoveContainer" containerID="021f6b22de043f9421aa3387e408457b15cb1a439560806440749678a33ad98a" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.284814 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.288724 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1bf281ab-4f4c-447f-8b99-eee662974a92" (UID: "1bf281ab-4f4c-447f-8b99-eee662974a92"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.301843 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"839d4518-f84b-4a2c-81eb-c0112da70e71","Type":"ContainerStarted","Data":"6ad26f5464c27ffa3aa0151c948ca20192a7d079b0f89667a17c0af483512e33"} Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.302814 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"839d4518-f84b-4a2c-81eb-c0112da70e71","Type":"ContainerStarted","Data":"ce401dacb21b47794058802457fa445e5de15879b0ab9993cee132f4016db1fc"} Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.310263 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.310300 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/1bf281ab-4f4c-447f-8b99-eee662974a92-kube-api-access-xx5tc\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.310317 4851 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.310347 4851 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bf281ab-4f4c-447f-8b99-eee662974a92-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.310360 4851 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-sg-core-conf-yaml\") on node \"crc\" DevicePath 
\"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.313392 4851 generic.go:334] "Generic (PLEG): container finished" podID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerID="0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed" exitCode=0 Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.313439 4851 generic.go:334] "Generic (PLEG): container finished" podID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerID="f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137" exitCode=2 Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.313450 4851 generic.go:334] "Generic (PLEG): container finished" podID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerID="8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73" exitCode=0 Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.313459 4851 generic.go:334] "Generic (PLEG): container finished" podID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerID="e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34" exitCode=0 Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.313481 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf281ab-4f4c-447f-8b99-eee662974a92","Type":"ContainerDied","Data":"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed"} Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.313514 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf281ab-4f4c-447f-8b99-eee662974a92","Type":"ContainerDied","Data":"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137"} Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.313528 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf281ab-4f4c-447f-8b99-eee662974a92","Type":"ContainerDied","Data":"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73"} Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.313539 4851 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf281ab-4f4c-447f-8b99-eee662974a92","Type":"ContainerDied","Data":"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34"} Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.313551 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bf281ab-4f4c-447f-8b99-eee662974a92","Type":"ContainerDied","Data":"83b0dc1b34dfae4d984d2d8215af2e784795c3e4e38201c25848b099a94456c9"} Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.313629 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.326641 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.350540 4851 scope.go:117] "RemoveContainer" containerID="bcac84db92448afeab719f76638f5c39c4fab233d29f104717d59e00f4f4d196" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.354412 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.384230 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bf281ab-4f4c-447f-8b99-eee662974a92" (UID: "1bf281ab-4f4c-447f-8b99-eee662974a92"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.393348 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:28:11 crc kubenswrapper[4851]: E0223 13:28:11.393665 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="154ab902-181d-479d-b449-acc94531a235" containerName="glance-httpd" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.393681 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="154ab902-181d-479d-b449-acc94531a235" containerName="glance-httpd" Feb 23 13:28:11 crc kubenswrapper[4851]: E0223 13:28:11.393695 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="proxy-httpd" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.393702 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="proxy-httpd" Feb 23 13:28:11 crc kubenswrapper[4851]: E0223 13:28:11.393715 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="154ab902-181d-479d-b449-acc94531a235" containerName="glance-log" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.393721 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="154ab902-181d-479d-b449-acc94531a235" containerName="glance-log" Feb 23 13:28:11 crc kubenswrapper[4851]: E0223 13:28:11.393731 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="ceilometer-notification-agent" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.393737 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="ceilometer-notification-agent" Feb 23 13:28:11 crc kubenswrapper[4851]: E0223 13:28:11.393745 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" 
containerName="sg-core" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.393751 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="sg-core" Feb 23 13:28:11 crc kubenswrapper[4851]: E0223 13:28:11.393764 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="ceilometer-central-agent" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.393771 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="ceilometer-central-agent" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.394579 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="sg-core" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.394603 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="proxy-httpd" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.394611 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="154ab902-181d-479d-b449-acc94531a235" containerName="glance-httpd" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.394626 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="ceilometer-notification-agent" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.394635 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" containerName="ceilometer-central-agent" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.394645 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="154ab902-181d-479d-b449-acc94531a235" containerName="glance-log" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.395538 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.397656 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.402889 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.412878 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.424770 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.435724 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-config-data" (OuterVolumeSpecName: "config-data") pod "1bf281ab-4f4c-447f-8b99-eee662974a92" (UID: "1bf281ab-4f4c-447f-8b99-eee662974a92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.440495 4851 scope.go:117] "RemoveContainer" containerID="0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.508632 4851 scope.go:117] "RemoveContainer" containerID="f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.515200 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv7vb\" (UniqueName: \"kubernetes.io/projected/17103db4-b198-4896-8bec-1e1d1bf8efa1-kube-api-access-jv7vb\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.515266 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17103db4-b198-4896-8bec-1e1d1bf8efa1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.515349 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.515380 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " 
pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.515398 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.515420 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17103db4-b198-4896-8bec-1e1d1bf8efa1-logs\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.515461 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.515478 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.515556 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bf281ab-4f4c-447f-8b99-eee662974a92-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.552715 4851 scope.go:117] "RemoveContainer" 
containerID="8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.603960 4851 scope.go:117] "RemoveContainer" containerID="e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.616592 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.616638 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.616676 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17103db4-b198-4896-8bec-1e1d1bf8efa1-logs\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.616730 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.616754 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.616841 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv7vb\" (UniqueName: \"kubernetes.io/projected/17103db4-b198-4896-8bec-1e1d1bf8efa1-kube-api-access-jv7vb\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.616877 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17103db4-b198-4896-8bec-1e1d1bf8efa1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.616942 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.617678 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17103db4-b198-4896-8bec-1e1d1bf8efa1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.617895 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17103db4-b198-4896-8bec-1e1d1bf8efa1-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.617970 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.624454 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.624773 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.625321 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.633781 4851 scope.go:117] "RemoveContainer" containerID="0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.634691 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17103db4-b198-4896-8bec-1e1d1bf8efa1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: E0223 13:28:11.637037 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed\": container with ID starting with 0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed not found: ID does not exist" containerID="0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.637072 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed"} err="failed to get container status \"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed\": rpc error: code = NotFound desc = could not find container \"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed\": container with ID starting with 0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.637113 4851 scope.go:117] "RemoveContainer" containerID="f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.642948 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv7vb\" (UniqueName: \"kubernetes.io/projected/17103db4-b198-4896-8bec-1e1d1bf8efa1-kube-api-access-jv7vb\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: E0223 13:28:11.644588 4851 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137\": container with ID starting with f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137 not found: ID does not exist" containerID="f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.644637 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137"} err="failed to get container status \"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137\": rpc error: code = NotFound desc = could not find container \"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137\": container with ID starting with f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.644667 4851 scope.go:117] "RemoveContainer" containerID="8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73" Feb 23 13:28:11 crc kubenswrapper[4851]: E0223 13:28:11.645078 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73\": container with ID starting with 8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73 not found: ID does not exist" containerID="8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.645143 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73"} err="failed to get container status \"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73\": rpc error: code = NotFound desc = could not find container 
\"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73\": container with ID starting with 8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.645162 4851 scope.go:117] "RemoveContainer" containerID="e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34" Feb 23 13:28:11 crc kubenswrapper[4851]: E0223 13:28:11.645977 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34\": container with ID starting with e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34 not found: ID does not exist" containerID="e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.646018 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34"} err="failed to get container status \"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34\": rpc error: code = NotFound desc = could not find container \"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34\": container with ID starting with e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.646035 4851 scope.go:117] "RemoveContainer" containerID="0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.656517 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed"} err="failed to get container status \"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed\": rpc error: code = NotFound desc = could not find 
container \"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed\": container with ID starting with 0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.656764 4851 scope.go:117] "RemoveContainer" containerID="f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.657746 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137"} err="failed to get container status \"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137\": rpc error: code = NotFound desc = could not find container \"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137\": container with ID starting with f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.657796 4851 scope.go:117] "RemoveContainer" containerID="8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.658200 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73"} err="failed to get container status \"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73\": rpc error: code = NotFound desc = could not find container \"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73\": container with ID starting with 8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.658237 4851 scope.go:117] "RemoveContainer" containerID="e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.658583 4851 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34"} err="failed to get container status \"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34\": rpc error: code = NotFound desc = could not find container \"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34\": container with ID starting with e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.658620 4851 scope.go:117] "RemoveContainer" containerID="0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.658853 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed"} err="failed to get container status \"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed\": rpc error: code = NotFound desc = could not find container \"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed\": container with ID starting with 0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.658889 4851 scope.go:117] "RemoveContainer" containerID="f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.659161 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137"} err="failed to get container status \"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137\": rpc error: code = NotFound desc = could not find container \"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137\": container with ID starting with 
f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.659205 4851 scope.go:117] "RemoveContainer" containerID="8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.662294 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73"} err="failed to get container status \"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73\": rpc error: code = NotFound desc = could not find container \"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73\": container with ID starting with 8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.662436 4851 scope.go:117] "RemoveContainer" containerID="e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.663052 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34"} err="failed to get container status \"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34\": rpc error: code = NotFound desc = could not find container \"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34\": container with ID starting with e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.663158 4851 scope.go:117] "RemoveContainer" containerID="0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.664227 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"17103db4-b198-4896-8bec-1e1d1bf8efa1\") " pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.664404 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed"} err="failed to get container status \"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed\": rpc error: code = NotFound desc = could not find container \"0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed\": container with ID starting with 0f716790643fe8a1e8ed7799e58cf8cc9d1437d41ce3626254ea0cbab55f46ed not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.664481 4851 scope.go:117] "RemoveContainer" containerID="f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.664824 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137"} err="failed to get container status \"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137\": rpc error: code = NotFound desc = could not find container \"f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137\": container with ID starting with f7ec7a3e262763ca4716d93b22e3fb9309aa078994a090dd42edcdebfa87c137 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.664857 4851 scope.go:117] "RemoveContainer" containerID="8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.665403 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73"} err="failed to get container status 
\"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73\": rpc error: code = NotFound desc = could not find container \"8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73\": container with ID starting with 8e8da80d6ff094a3470cd0f5b431f2d0989735d96828866bd11933cfae995c73 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.665422 4851 scope.go:117] "RemoveContainer" containerID="e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.665749 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34"} err="failed to get container status \"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34\": rpc error: code = NotFound desc = could not find container \"e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34\": container with ID starting with e653c20da6c8cfb72e1305d6526b0daefb01b16e6372cee5cf19a696582adb34 not found: ID does not exist" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.729749 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.795553 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.813387 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.823729 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.826527 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.838420 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.838812 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.859784 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.928305 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7n8c\" (UniqueName: \"kubernetes.io/projected/8114df66-f2df-4a18-84ba-a828a41175dd-kube-api-access-s7n8c\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.928615 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.928669 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-log-httpd\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.928704 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-config-data\") pod \"ceilometer-0\" (UID: 
\"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.928739 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.928807 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-run-httpd\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.928906 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-scripts\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.994292 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="154ab902-181d-479d-b449-acc94531a235" path="/var/lib/kubelet/pods/154ab902-181d-479d-b449-acc94531a235/volumes" Feb 23 13:28:11 crc kubenswrapper[4851]: I0223 13:28:11.995213 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf281ab-4f4c-447f-8b99-eee662974a92" path="/var/lib/kubelet/pods/1bf281ab-4f4c-447f-8b99-eee662974a92/volumes" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.030125 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7n8c\" (UniqueName: \"kubernetes.io/projected/8114df66-f2df-4a18-84ba-a828a41175dd-kube-api-access-s7n8c\") pod \"ceilometer-0\" (UID: 
\"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.030199 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.030246 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-log-httpd\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.030287 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-config-data\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.030304 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.030371 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-run-httpd\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.030392 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-scripts\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.033248 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-log-httpd\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.033483 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-run-httpd\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.035435 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.037322 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-config-data\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.039798 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.042887 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-scripts\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.047587 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7n8c\" (UniqueName: \"kubernetes.io/projected/8114df66-f2df-4a18-84ba-a828a41175dd-kube-api-access-s7n8c\") pod \"ceilometer-0\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.141720 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69f9fbd4d-lldd8" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.199403 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.332047 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"839d4518-f84b-4a2c-81eb-c0112da70e71","Type":"ContainerStarted","Data":"25bc6812c3e7acdb35e5aa1a27ead6eedf129211cae4df98707950bc4af0fc37"} Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.341222 4851 generic.go:334] "Generic (PLEG): container finished" podID="2a5eaa08-375b-4738-9b4c-0440dffbd7bf" containerID="96ed0095741921188a8d41746ad066a62c194e8a164dea434629ddc034da5e22" exitCode=0 Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.341274 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f5f47d7dd-d76bg" event={"ID":"2a5eaa08-375b-4738-9b4c-0440dffbd7bf","Type":"ContainerDied","Data":"96ed0095741921188a8d41746ad066a62c194e8a164dea434629ddc034da5e22"} Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.364581 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.36456036 podStartE2EDuration="3.36456036s" podCreationTimestamp="2026-02-23 13:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:12.350678712 +0000 UTC m=+1247.032382390" watchObservedRunningTime="2026-02-23 13:28:12.36456036 +0000 UTC m=+1247.046264038" Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.397915 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 13:28:12 crc kubenswrapper[4851]: I0223 13:28:12.694509 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:13 crc kubenswrapper[4851]: I0223 13:28:13.364501 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"17103db4-b198-4896-8bec-1e1d1bf8efa1","Type":"ContainerStarted","Data":"8baecfac703cf2f9346cf7e38e1c05528e6986659e01a149c718d04593bc04e7"} Feb 23 13:28:13 crc kubenswrapper[4851]: I0223 13:28:13.364867 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"17103db4-b198-4896-8bec-1e1d1bf8efa1","Type":"ContainerStarted","Data":"e3a425a4f96ee3e7c7f52cc080905e0dfe353d43a04cd99b19d305fd2d70085b"} Feb 23 13:28:13 crc kubenswrapper[4851]: I0223 13:28:13.373701 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8114df66-f2df-4a18-84ba-a828a41175dd","Type":"ContainerStarted","Data":"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc"} Feb 23 13:28:13 crc kubenswrapper[4851]: I0223 13:28:13.373782 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8114df66-f2df-4a18-84ba-a828a41175dd","Type":"ContainerStarted","Data":"7d489b80ba996cf60cdf2de837dda57a7b31dd20678da1cdc17eb9cf2c5f6a4d"} Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.024455 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-96s8v"] Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.025990 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.027992 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wldbk" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.028195 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.032782 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.033937 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-96s8v"] Feb 23 13:28:14 crc kubenswrapper[4851]: E0223 13:28:14.080950 4851 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a5eaa08_375b_4738_9b4c_0440dffbd7bf.slice/crio-9768cd599cd3d06879c56fe10af928af5abb04a302fa94a6f1919d0d955266fe.scope\": RecentStats: unable to find data in memory cache]" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.171496 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rml6w\" (UniqueName: \"kubernetes.io/projected/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-kube-api-access-rml6w\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.171571 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-config-data\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " 
pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.171595 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-scripts\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.171615 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.273485 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-config-data\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.273558 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-scripts\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.273605 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: 
\"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.273786 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rml6w\" (UniqueName: \"kubernetes.io/projected/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-kube-api-access-rml6w\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.283571 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.296638 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-config-data\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.299776 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rml6w\" (UniqueName: \"kubernetes.io/projected/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-kube-api-access-rml6w\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.305539 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-scripts\") pod \"nova-cell0-conductor-db-sync-96s8v\" (UID: 
\"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.345317 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.386211 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8114df66-f2df-4a18-84ba-a828a41175dd","Type":"ContainerStarted","Data":"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a"} Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.389209 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"17103db4-b198-4896-8bec-1e1d1bf8efa1","Type":"ContainerStarted","Data":"06f976aadfb60f8fcf51b747fa9fadeaa74d82dd6977c56dd1f0f435f5697ad1"} Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.396637 4851 generic.go:334] "Generic (PLEG): container finished" podID="2a5eaa08-375b-4738-9b4c-0440dffbd7bf" containerID="9768cd599cd3d06879c56fe10af928af5abb04a302fa94a6f1919d0d955266fe" exitCode=0 Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.396688 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f5f47d7dd-d76bg" event={"ID":"2a5eaa08-375b-4738-9b4c-0440dffbd7bf","Type":"ContainerDied","Data":"9768cd599cd3d06879c56fe10af928af5abb04a302fa94a6f1919d0d955266fe"} Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.415917 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.415899626 podStartE2EDuration="3.415899626s" podCreationTimestamp="2026-02-23 13:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:14.41103171 +0000 UTC m=+1249.092735398" watchObservedRunningTime="2026-02-23 
13:28:14.415899626 +0000 UTC m=+1249.097603304" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.635994 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.789887 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27wgm\" (UniqueName: \"kubernetes.io/projected/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-kube-api-access-27wgm\") pod \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.790028 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-ovndb-tls-certs\") pod \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.790124 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-combined-ca-bundle\") pod \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.790528 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-httpd-config\") pod \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.790580 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-config\") pod \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\" (UID: \"2a5eaa08-375b-4738-9b4c-0440dffbd7bf\") " Feb 
23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.795110 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2a5eaa08-375b-4738-9b4c-0440dffbd7bf" (UID: "2a5eaa08-375b-4738-9b4c-0440dffbd7bf"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.796829 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-kube-api-access-27wgm" (OuterVolumeSpecName: "kube-api-access-27wgm") pod "2a5eaa08-375b-4738-9b4c-0440dffbd7bf" (UID: "2a5eaa08-375b-4738-9b4c-0440dffbd7bf"). InnerVolumeSpecName "kube-api-access-27wgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.837999 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-config" (OuterVolumeSpecName: "config") pod "2a5eaa08-375b-4738-9b4c-0440dffbd7bf" (UID: "2a5eaa08-375b-4738-9b4c-0440dffbd7bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.862431 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a5eaa08-375b-4738-9b4c-0440dffbd7bf" (UID: "2a5eaa08-375b-4738-9b4c-0440dffbd7bf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.870730 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2a5eaa08-375b-4738-9b4c-0440dffbd7bf" (UID: "2a5eaa08-375b-4738-9b4c-0440dffbd7bf"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.874311 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-96s8v"] Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.893532 4851 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.893571 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.893585 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27wgm\" (UniqueName: \"kubernetes.io/projected/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-kube-api-access-27wgm\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.893601 4851 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.893613 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5eaa08-375b-4738-9b4c-0440dffbd7bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 
23 13:28:14 crc kubenswrapper[4851]: I0223 13:28:14.895040 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:15 crc kubenswrapper[4851]: I0223 13:28:15.409022 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f5f47d7dd-d76bg" event={"ID":"2a5eaa08-375b-4738-9b4c-0440dffbd7bf","Type":"ContainerDied","Data":"4d4e935faaa42fb4e0c51c84dd12ab746dd3106a95aa6fb30e0cb9868f930f6c"} Feb 23 13:28:15 crc kubenswrapper[4851]: I0223 13:28:15.409056 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f5f47d7dd-d76bg" Feb 23 13:28:15 crc kubenswrapper[4851]: I0223 13:28:15.409082 4851 scope.go:117] "RemoveContainer" containerID="96ed0095741921188a8d41746ad066a62c194e8a164dea434629ddc034da5e22" Feb 23 13:28:15 crc kubenswrapper[4851]: I0223 13:28:15.413458 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-96s8v" event={"ID":"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5","Type":"ContainerStarted","Data":"46ee25e257a9947a9e86d507fbdf311bfad1732ba183e0381043ed44555d9a73"} Feb 23 13:28:15 crc kubenswrapper[4851]: I0223 13:28:15.424236 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8114df66-f2df-4a18-84ba-a828a41175dd","Type":"ContainerStarted","Data":"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875"} Feb 23 13:28:15 crc kubenswrapper[4851]: I0223 13:28:15.442929 4851 scope.go:117] "RemoveContainer" containerID="9768cd599cd3d06879c56fe10af928af5abb04a302fa94a6f1919d0d955266fe" Feb 23 13:28:15 crc kubenswrapper[4851]: I0223 13:28:15.444205 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f5f47d7dd-d76bg"] Feb 23 13:28:15 crc kubenswrapper[4851]: I0223 13:28:15.453085 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7f5f47d7dd-d76bg"] Feb 23 13:28:15 crc kubenswrapper[4851]: I0223 13:28:15.979109 
4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5eaa08-375b-4738-9b4c-0440dffbd7bf" path="/var/lib/kubelet/pods/2a5eaa08-375b-4738-9b4c-0440dffbd7bf/volumes" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.414811 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.459971 4851 generic.go:334] "Generic (PLEG): container finished" podID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerID="292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4" exitCode=137 Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.460031 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f9fbd4d-lldd8" event={"ID":"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32","Type":"ContainerDied","Data":"292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4"} Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.460055 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69f9fbd4d-lldd8" event={"ID":"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32","Type":"ContainerDied","Data":"38acd9dd181607e86aa8c306f4837b75174349bff0e376b5aa0c30750acf7eb4"} Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.460070 4851 scope.go:117] "RemoveContainer" containerID="f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.460180 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69f9fbd4d-lldd8" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.478945 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8114df66-f2df-4a18-84ba-a828a41175dd","Type":"ContainerStarted","Data":"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed"} Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.479695 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.479745 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="ceilometer-central-agent" containerID="cri-o://62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc" gracePeriod=30 Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.479820 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="proxy-httpd" containerID="cri-o://e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed" gracePeriod=30 Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.479854 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="sg-core" containerID="cri-o://b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875" gracePeriod=30 Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.479888 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="ceilometer-notification-agent" containerID="cri-o://a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a" gracePeriod=30 Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.529798 4851 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.734204303 podStartE2EDuration="6.529773742s" podCreationTimestamp="2026-02-23 13:28:11 +0000 UTC" firstStartedPulling="2026-02-23 13:28:12.699033989 +0000 UTC m=+1247.380737667" lastFinishedPulling="2026-02-23 13:28:16.494603428 +0000 UTC m=+1251.176307106" observedRunningTime="2026-02-23 13:28:17.522391426 +0000 UTC m=+1252.204095124" watchObservedRunningTime="2026-02-23 13:28:17.529773742 +0000 UTC m=+1252.211477440" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.550104 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-combined-ca-bundle\") pod \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.550187 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-tls-certs\") pod \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.550223 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-scripts\") pod \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.550249 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-secret-key\") pod \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " Feb 23 13:28:17 crc kubenswrapper[4851]: 
I0223 13:28:17.550291 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdjg6\" (UniqueName: \"kubernetes.io/projected/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-kube-api-access-pdjg6\") pod \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.550319 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-logs\") pod \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.550364 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-config-data\") pod \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\" (UID: \"7c4b27e4-3b24-4ca1-a209-dfc5468b5e32\") " Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.551853 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-logs" (OuterVolumeSpecName: "logs") pod "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" (UID: "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.576285 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-kube-api-access-pdjg6" (OuterVolumeSpecName: "kube-api-access-pdjg6") pod "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" (UID: "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32"). InnerVolumeSpecName "kube-api-access-pdjg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.598310 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-config-data" (OuterVolumeSpecName: "config-data") pod "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" (UID: "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.599997 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" (UID: "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.601032 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-scripts" (OuterVolumeSpecName: "scripts") pod "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" (UID: "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.652004 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.652033 4851 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.652044 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdjg6\" (UniqueName: \"kubernetes.io/projected/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-kube-api-access-pdjg6\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.652055 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.652063 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.658485 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" (UID: "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.658499 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" (UID: "7c4b27e4-3b24-4ca1-a209-dfc5468b5e32"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.706472 4851 scope.go:117] "RemoveContainer" containerID="292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.745890 4851 scope.go:117] "RemoveContainer" containerID="f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41" Feb 23 13:28:17 crc kubenswrapper[4851]: E0223 13:28:17.746418 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41\": container with ID starting with f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41 not found: ID does not exist" containerID="f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.746479 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41"} err="failed to get container status \"f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41\": rpc error: code = NotFound desc = could not find container \"f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41\": container with ID starting with f18ea4f7b4806a5097f6bd33a1b0555ac408aa773fc9ef25f0a858dcadc4ae41 not found: ID does not exist" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.746503 4851 scope.go:117] 
"RemoveContainer" containerID="292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4" Feb 23 13:28:17 crc kubenswrapper[4851]: E0223 13:28:17.747215 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4\": container with ID starting with 292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4 not found: ID does not exist" containerID="292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.747249 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4"} err="failed to get container status \"292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4\": rpc error: code = NotFound desc = could not find container \"292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4\": container with ID starting with 292819d93835f7bad1bfbbf832060bb727fe5f0fc131f5c6b7f1ef829e78ebc4 not found: ID does not exist" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.753522 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.753638 4851 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.793556 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69f9fbd4d-lldd8"] Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.801286 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/horizon-69f9fbd4d-lldd8"] Feb 23 13:28:17 crc kubenswrapper[4851]: I0223 13:28:17.979238 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" path="/var/lib/kubelet/pods/7c4b27e4-3b24-4ca1-a209-dfc5468b5e32/volumes" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.341889 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.470424 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-combined-ca-bundle\") pod \"8114df66-f2df-4a18-84ba-a828a41175dd\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.470770 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7n8c\" (UniqueName: \"kubernetes.io/projected/8114df66-f2df-4a18-84ba-a828a41175dd-kube-api-access-s7n8c\") pod \"8114df66-f2df-4a18-84ba-a828a41175dd\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.471414 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-sg-core-conf-yaml\") pod \"8114df66-f2df-4a18-84ba-a828a41175dd\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.471437 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-config-data\") pod \"8114df66-f2df-4a18-84ba-a828a41175dd\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.471477 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-scripts\") pod \"8114df66-f2df-4a18-84ba-a828a41175dd\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.471497 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-run-httpd\") pod \"8114df66-f2df-4a18-84ba-a828a41175dd\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.471560 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-log-httpd\") pod \"8114df66-f2df-4a18-84ba-a828a41175dd\" (UID: \"8114df66-f2df-4a18-84ba-a828a41175dd\") " Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.472263 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8114df66-f2df-4a18-84ba-a828a41175dd" (UID: "8114df66-f2df-4a18-84ba-a828a41175dd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.472470 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8114df66-f2df-4a18-84ba-a828a41175dd" (UID: "8114df66-f2df-4a18-84ba-a828a41175dd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.476488 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8114df66-f2df-4a18-84ba-a828a41175dd-kube-api-access-s7n8c" (OuterVolumeSpecName: "kube-api-access-s7n8c") pod "8114df66-f2df-4a18-84ba-a828a41175dd" (UID: "8114df66-f2df-4a18-84ba-a828a41175dd"). InnerVolumeSpecName "kube-api-access-s7n8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.476517 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-scripts" (OuterVolumeSpecName: "scripts") pod "8114df66-f2df-4a18-84ba-a828a41175dd" (UID: "8114df66-f2df-4a18-84ba-a828a41175dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.495189 4851 generic.go:334] "Generic (PLEG): container finished" podID="8114df66-f2df-4a18-84ba-a828a41175dd" containerID="e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed" exitCode=0 Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.495295 4851 generic.go:334] "Generic (PLEG): container finished" podID="8114df66-f2df-4a18-84ba-a828a41175dd" containerID="b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875" exitCode=2 Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.495305 4851 generic.go:334] "Generic (PLEG): container finished" podID="8114df66-f2df-4a18-84ba-a828a41175dd" containerID="a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a" exitCode=0 Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.495387 4851 generic.go:334] "Generic (PLEG): container finished" podID="8114df66-f2df-4a18-84ba-a828a41175dd" containerID="62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc" exitCode=0 Feb 23 13:28:18 crc kubenswrapper[4851]: 
I0223 13:28:18.495253 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8114df66-f2df-4a18-84ba-a828a41175dd","Type":"ContainerDied","Data":"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed"} Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.495467 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8114df66-f2df-4a18-84ba-a828a41175dd","Type":"ContainerDied","Data":"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875"} Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.495545 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8114df66-f2df-4a18-84ba-a828a41175dd","Type":"ContainerDied","Data":"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a"} Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.495556 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8114df66-f2df-4a18-84ba-a828a41175dd","Type":"ContainerDied","Data":"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc"} Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.495564 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8114df66-f2df-4a18-84ba-a828a41175dd","Type":"ContainerDied","Data":"7d489b80ba996cf60cdf2de837dda57a7b31dd20678da1cdc17eb9cf2c5f6a4d"} Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.495582 4851 scope.go:117] "RemoveContainer" containerID="e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.495277 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.521627 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8114df66-f2df-4a18-84ba-a828a41175dd" (UID: "8114df66-f2df-4a18-84ba-a828a41175dd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.563532 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8114df66-f2df-4a18-84ba-a828a41175dd" (UID: "8114df66-f2df-4a18-84ba-a828a41175dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.581372 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.581399 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7n8c\" (UniqueName: \"kubernetes.io/projected/8114df66-f2df-4a18-84ba-a828a41175dd-kube-api-access-s7n8c\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.581412 4851 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.581420 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.581429 4851 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.581436 4851 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8114df66-f2df-4a18-84ba-a828a41175dd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.614340 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-config-data" (OuterVolumeSpecName: "config-data") pod "8114df66-f2df-4a18-84ba-a828a41175dd" (UID: "8114df66-f2df-4a18-84ba-a828a41175dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.682784 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8114df66-f2df-4a18-84ba-a828a41175dd-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.825613 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.844149 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.875860 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:18 crc kubenswrapper[4851]: E0223 13:28:18.878629 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="ceilometer-central-agent" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.878745 4851 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="ceilometer-central-agent" Feb 23 13:28:18 crc kubenswrapper[4851]: E0223 13:28:18.878850 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="proxy-httpd" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.878912 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="proxy-httpd" Feb 23 13:28:18 crc kubenswrapper[4851]: E0223 13:28:18.879002 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5eaa08-375b-4738-9b4c-0440dffbd7bf" containerName="neutron-api" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.879059 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5eaa08-375b-4738-9b4c-0440dffbd7bf" containerName="neutron-api" Feb 23 13:28:18 crc kubenswrapper[4851]: E0223 13:28:18.879700 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="sg-core" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.879796 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="sg-core" Feb 23 13:28:18 crc kubenswrapper[4851]: E0223 13:28:18.879883 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon-log" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.879951 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon-log" Feb 23 13:28:18 crc kubenswrapper[4851]: E0223 13:28:18.880050 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="ceilometer-notification-agent" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.880121 4851 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="ceilometer-notification-agent" Feb 23 13:28:18 crc kubenswrapper[4851]: E0223 13:28:18.880231 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.880302 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon" Feb 23 13:28:18 crc kubenswrapper[4851]: E0223 13:28:18.880404 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5eaa08-375b-4738-9b4c-0440dffbd7bf" containerName="neutron-httpd" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.880476 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5eaa08-375b-4738-9b4c-0440dffbd7bf" containerName="neutron-httpd" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.881380 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5eaa08-375b-4738-9b4c-0440dffbd7bf" containerName="neutron-httpd" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.881489 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="ceilometer-notification-agent" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.881574 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon-log" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.881667 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="proxy-httpd" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.881792 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="ceilometer-central-agent" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.881823 4851 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2a5eaa08-375b-4738-9b4c-0440dffbd7bf" containerName="neutron-api" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.881851 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" containerName="sg-core" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.881867 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c4b27e4-3b24-4ca1-a209-dfc5468b5e32" containerName="horizon" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.899288 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.904820 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.908784 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 13:28:18 crc kubenswrapper[4851]: I0223 13:28:18.908977 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.001911 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d67px\" (UniqueName: \"kubernetes.io/projected/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-kube-api-access-d67px\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.002021 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.002063 4851 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-scripts\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.002098 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-run-httpd\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.002135 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.002165 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-config-data\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.002197 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-log-httpd\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.103680 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-log-httpd\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.103760 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d67px\" (UniqueName: \"kubernetes.io/projected/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-kube-api-access-d67px\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.103858 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.103887 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-scripts\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.103912 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-run-httpd\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.103940 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: 
I0223 13:28:19.103964 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-config-data\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.104620 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-log-httpd\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.104819 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-run-httpd\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.109468 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.109500 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-scripts\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.110047 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-config-data\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " 
pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.114021 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.123914 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d67px\" (UniqueName: \"kubernetes.io/projected/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-kube-api-access-d67px\") pod \"ceilometer-0\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.221712 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.642847 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.642894 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.686857 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.700309 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 13:28:19 crc kubenswrapper[4851]: I0223 13:28:19.984617 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8114df66-f2df-4a18-84ba-a828a41175dd" path="/var/lib/kubelet/pods/8114df66-f2df-4a18-84ba-a828a41175dd/volumes" Feb 23 13:28:20 crc kubenswrapper[4851]: I0223 13:28:20.533157 4851 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 13:28:20 crc kubenswrapper[4851]: I0223 13:28:20.533199 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 13:28:21 crc kubenswrapper[4851]: I0223 13:28:21.730928 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:21 crc kubenswrapper[4851]: I0223 13:28:21.731254 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:21 crc kubenswrapper[4851]: I0223 13:28:21.770782 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:21 crc kubenswrapper[4851]: I0223 13:28:21.786635 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:22 crc kubenswrapper[4851]: I0223 13:28:22.402989 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 13:28:22 crc kubenswrapper[4851]: I0223 13:28:22.449840 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 13:28:22 crc kubenswrapper[4851]: I0223 13:28:22.558425 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:22 crc kubenswrapper[4851]: I0223 13:28:22.558464 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.480286 4851 scope.go:117] "RemoveContainer" containerID="b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.747468 4851 scope.go:117] "RemoveContainer" 
containerID="a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.789942 4851 scope.go:117] "RemoveContainer" containerID="62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.830118 4851 scope.go:117] "RemoveContainer" containerID="e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed" Feb 23 13:28:23 crc kubenswrapper[4851]: E0223 13:28:23.830762 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed\": container with ID starting with e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed not found: ID does not exist" containerID="e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.830792 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed"} err="failed to get container status \"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed\": rpc error: code = NotFound desc = could not find container \"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed\": container with ID starting with e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.830811 4851 scope.go:117] "RemoveContainer" containerID="b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875" Feb 23 13:28:23 crc kubenswrapper[4851]: E0223 13:28:23.831401 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875\": container with ID starting with 
b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875 not found: ID does not exist" containerID="b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.831423 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875"} err="failed to get container status \"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875\": rpc error: code = NotFound desc = could not find container \"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875\": container with ID starting with b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875 not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.831440 4851 scope.go:117] "RemoveContainer" containerID="a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a" Feb 23 13:28:23 crc kubenswrapper[4851]: E0223 13:28:23.831688 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a\": container with ID starting with a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a not found: ID does not exist" containerID="a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.831709 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a"} err="failed to get container status \"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a\": rpc error: code = NotFound desc = could not find container \"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a\": container with ID starting with a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a not found: ID does not 
exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.831722 4851 scope.go:117] "RemoveContainer" containerID="62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc" Feb 23 13:28:23 crc kubenswrapper[4851]: E0223 13:28:23.831933 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc\": container with ID starting with 62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc not found: ID does not exist" containerID="62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.831950 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc"} err="failed to get container status \"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc\": rpc error: code = NotFound desc = could not find container \"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc\": container with ID starting with 62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.831965 4851 scope.go:117] "RemoveContainer" containerID="e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.832212 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed"} err="failed to get container status \"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed\": rpc error: code = NotFound desc = could not find container \"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed\": container with ID starting with e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed not found: ID 
does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.832227 4851 scope.go:117] "RemoveContainer" containerID="b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.834785 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875"} err="failed to get container status \"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875\": rpc error: code = NotFound desc = could not find container \"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875\": container with ID starting with b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875 not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.834808 4851 scope.go:117] "RemoveContainer" containerID="a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.835100 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a"} err="failed to get container status \"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a\": rpc error: code = NotFound desc = could not find container \"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a\": container with ID starting with a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.835120 4851 scope.go:117] "RemoveContainer" containerID="62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.836262 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc"} err="failed to get container 
status \"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc\": rpc error: code = NotFound desc = could not find container \"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc\": container with ID starting with 62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.836285 4851 scope.go:117] "RemoveContainer" containerID="e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.836619 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed"} err="failed to get container status \"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed\": rpc error: code = NotFound desc = could not find container \"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed\": container with ID starting with e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.836652 4851 scope.go:117] "RemoveContainer" containerID="b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.845535 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875"} err="failed to get container status \"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875\": rpc error: code = NotFound desc = could not find container \"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875\": container with ID starting with b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875 not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.845582 4851 scope.go:117] "RemoveContainer" 
containerID="a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.846493 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a"} err="failed to get container status \"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a\": rpc error: code = NotFound desc = could not find container \"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a\": container with ID starting with a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.846515 4851 scope.go:117] "RemoveContainer" containerID="62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.846779 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc"} err="failed to get container status \"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc\": rpc error: code = NotFound desc = could not find container \"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc\": container with ID starting with 62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.846802 4851 scope.go:117] "RemoveContainer" containerID="e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.847054 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed"} err="failed to get container status \"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed\": rpc error: code = NotFound desc = could 
not find container \"e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed\": container with ID starting with e2e39f3f0b42f9483818331b3e94728ceb1d83d5c0f5087eb69306965438eeed not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.847079 4851 scope.go:117] "RemoveContainer" containerID="b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.847254 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875"} err="failed to get container status \"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875\": rpc error: code = NotFound desc = could not find container \"b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875\": container with ID starting with b0e038babc18e8389a15dfa1d42801b247ad6a317f851cdf76f9b983dac88875 not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.847271 4851 scope.go:117] "RemoveContainer" containerID="a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.847595 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a"} err="failed to get container status \"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a\": rpc error: code = NotFound desc = could not find container \"a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a\": container with ID starting with a23ac68ec524f44d5f0a56974467638223c277e69e270885a60dafd484121c4a not found: ID does not exist" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 13:28:23.847616 4851 scope.go:117] "RemoveContainer" containerID="62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc" Feb 23 13:28:23 crc kubenswrapper[4851]: I0223 
13:28:23.847804 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc"} err="failed to get container status \"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc\": rpc error: code = NotFound desc = could not find container \"62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc\": container with ID starting with 62e309d639e09d414e4f19c2e947e2a385c40534ce6e9d48c6a02057b33728bc not found: ID does not exist" Feb 23 13:28:24 crc kubenswrapper[4851]: I0223 13:28:24.023750 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:24 crc kubenswrapper[4851]: W0223 13:28:24.024200 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9380f80_5f4a_4a1d_bc5b_1f0488d32c09.slice/crio-19a3b9038f7eea14eca8407aada243fc740ea4b2a323ca4ce6c448321889fbc7 WatchSource:0}: Error finding container 19a3b9038f7eea14eca8407aada243fc740ea4b2a323ca4ce6c448321889fbc7: Status 404 returned error can't find the container with id 19a3b9038f7eea14eca8407aada243fc740ea4b2a323ca4ce6c448321889fbc7 Feb 23 13:28:24 crc kubenswrapper[4851]: I0223 13:28:24.307784 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:24 crc kubenswrapper[4851]: I0223 13:28:24.589125 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09","Type":"ContainerStarted","Data":"19a3b9038f7eea14eca8407aada243fc740ea4b2a323ca4ce6c448321889fbc7"} Feb 23 13:28:24 crc kubenswrapper[4851]: I0223 13:28:24.590192 4851 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:28:24 crc kubenswrapper[4851]: I0223 13:28:24.590205 4851 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 13:28:24 crc 
kubenswrapper[4851]: I0223 13:28:24.590526 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-96s8v" event={"ID":"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5","Type":"ContainerStarted","Data":"24a966637d389bf4855d40a33292acd850524301398839949486e23ad5e8f718"} Feb 23 13:28:24 crc kubenswrapper[4851]: I0223 13:28:24.617855 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-96s8v" podStartSLOduration=2.954392618 podStartE2EDuration="11.617835578s" podCreationTimestamp="2026-02-23 13:28:13 +0000 UTC" firstStartedPulling="2026-02-23 13:28:14.890505302 +0000 UTC m=+1249.572208980" lastFinishedPulling="2026-02-23 13:28:23.553948262 +0000 UTC m=+1258.235651940" observedRunningTime="2026-02-23 13:28:24.605659638 +0000 UTC m=+1259.287363326" watchObservedRunningTime="2026-02-23 13:28:24.617835578 +0000 UTC m=+1259.299539256" Feb 23 13:28:24 crc kubenswrapper[4851]: I0223 13:28:24.737572 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:24 crc kubenswrapper[4851]: I0223 13:28:24.738009 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 13:28:25 crc kubenswrapper[4851]: I0223 13:28:25.601222 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09","Type":"ContainerStarted","Data":"910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375"} Feb 23 13:28:25 crc kubenswrapper[4851]: I0223 13:28:25.602027 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09","Type":"ContainerStarted","Data":"b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901"} Feb 23 13:28:26 crc kubenswrapper[4851]: I0223 13:28:26.612197 4851 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09","Type":"ContainerStarted","Data":"cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d"} Feb 23 13:28:28 crc kubenswrapper[4851]: I0223 13:28:28.629931 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09","Type":"ContainerStarted","Data":"48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995"} Feb 23 13:28:28 crc kubenswrapper[4851]: I0223 13:28:28.630061 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="ceilometer-central-agent" containerID="cri-o://b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901" gracePeriod=30 Feb 23 13:28:28 crc kubenswrapper[4851]: I0223 13:28:28.630134 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="proxy-httpd" containerID="cri-o://48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995" gracePeriod=30 Feb 23 13:28:28 crc kubenswrapper[4851]: I0223 13:28:28.630185 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="ceilometer-notification-agent" containerID="cri-o://910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375" gracePeriod=30 Feb 23 13:28:28 crc kubenswrapper[4851]: I0223 13:28:28.630204 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="sg-core" containerID="cri-o://cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d" gracePeriod=30 Feb 23 13:28:28 crc kubenswrapper[4851]: I0223 13:28:28.630526 4851 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 13:28:28 crc kubenswrapper[4851]: I0223 13:28:28.652653 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.499969473 podStartE2EDuration="10.652629412s" podCreationTimestamp="2026-02-23 13:28:18 +0000 UTC" firstStartedPulling="2026-02-23 13:28:24.026353636 +0000 UTC m=+1258.708057304" lastFinishedPulling="2026-02-23 13:28:28.179013565 +0000 UTC m=+1262.860717243" observedRunningTime="2026-02-23 13:28:28.647935291 +0000 UTC m=+1263.329638979" watchObservedRunningTime="2026-02-23 13:28:28.652629412 +0000 UTC m=+1263.334333100" Feb 23 13:28:29 crc kubenswrapper[4851]: I0223 13:28:29.641877 4851 generic.go:334] "Generic (PLEG): container finished" podID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerID="48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995" exitCode=0 Feb 23 13:28:29 crc kubenswrapper[4851]: I0223 13:28:29.642626 4851 generic.go:334] "Generic (PLEG): container finished" podID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerID="cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d" exitCode=2 Feb 23 13:28:29 crc kubenswrapper[4851]: I0223 13:28:29.642696 4851 generic.go:334] "Generic (PLEG): container finished" podID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerID="910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375" exitCode=0 Feb 23 13:28:29 crc kubenswrapper[4851]: I0223 13:28:29.641930 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09","Type":"ContainerDied","Data":"48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995"} Feb 23 13:28:29 crc kubenswrapper[4851]: I0223 13:28:29.642833 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09","Type":"ContainerDied","Data":"cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d"} Feb 23 13:28:29 crc kubenswrapper[4851]: I0223 13:28:29.642908 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09","Type":"ContainerDied","Data":"910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375"} Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.204208 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.284106 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-run-httpd\") pod \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.284253 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-combined-ca-bundle\") pod \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.284291 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-log-httpd\") pod \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.284348 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d67px\" (UniqueName: \"kubernetes.io/projected/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-kube-api-access-d67px\") pod \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\" (UID: 
\"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.284376 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-sg-core-conf-yaml\") pod \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.284441 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-scripts\") pod \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.284592 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-config-data\") pod \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\" (UID: \"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09\") " Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.287003 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" (UID: "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.287084 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" (UID: "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.306367 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-kube-api-access-d67px" (OuterVolumeSpecName: "kube-api-access-d67px") pod "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" (UID: "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09"). InnerVolumeSpecName "kube-api-access-d67px". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.306512 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-scripts" (OuterVolumeSpecName: "scripts") pod "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" (UID: "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.333357 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" (UID: "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.387964 4851 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.388002 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d67px\" (UniqueName: \"kubernetes.io/projected/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-kube-api-access-d67px\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.388015 4851 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.388026 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.388036 4851 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.393184 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" (UID: "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.403408 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-config-data" (OuterVolumeSpecName: "config-data") pod "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" (UID: "c9380f80-5f4a-4a1d-bc5b-1f0488d32c09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.490286 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.490357 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.694268 4851 generic.go:334] "Generic (PLEG): container finished" podID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerID="b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901" exitCode=0 Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.694352 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.694373 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09","Type":"ContainerDied","Data":"b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901"} Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.694744 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9380f80-5f4a-4a1d-bc5b-1f0488d32c09","Type":"ContainerDied","Data":"19a3b9038f7eea14eca8407aada243fc740ea4b2a323ca4ce6c448321889fbc7"} Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.694771 4851 scope.go:117] "RemoveContainer" containerID="48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.696670 4851 generic.go:334] "Generic (PLEG): container finished" podID="a8cf62f0-5eae-44f2-b264-6e6449fd7ff5" containerID="24a966637d389bf4855d40a33292acd850524301398839949486e23ad5e8f718" exitCode=0 Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.696698 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-96s8v" event={"ID":"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5","Type":"ContainerDied","Data":"24a966637d389bf4855d40a33292acd850524301398839949486e23ad5e8f718"} Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.725698 4851 scope.go:117] "RemoveContainer" containerID="cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.736811 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.745069 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.746271 4851 scope.go:117] "RemoveContainer" 
containerID="910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.771357 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:35 crc kubenswrapper[4851]: E0223 13:28:35.772030 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="ceilometer-notification-agent" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.772047 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="ceilometer-notification-agent" Feb 23 13:28:35 crc kubenswrapper[4851]: E0223 13:28:35.772066 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="proxy-httpd" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.772073 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="proxy-httpd" Feb 23 13:28:35 crc kubenswrapper[4851]: E0223 13:28:35.772102 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="ceilometer-central-agent" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.772111 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="ceilometer-central-agent" Feb 23 13:28:35 crc kubenswrapper[4851]: E0223 13:28:35.772122 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="sg-core" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.772128 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="sg-core" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.772360 4851 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="ceilometer-central-agent" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.772383 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="sg-core" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.772396 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="proxy-httpd" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.772412 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" containerName="ceilometer-notification-agent" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.778383 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.782253 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.782540 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.790314 4851 scope.go:117] "RemoveContainer" containerID="b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.798608 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.835737 4851 scope.go:117] "RemoveContainer" containerID="48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995" Feb 23 13:28:35 crc kubenswrapper[4851]: E0223 13:28:35.836198 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995\": container with ID starting with 
48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995 not found: ID does not exist" containerID="48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.836248 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995"} err="failed to get container status \"48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995\": rpc error: code = NotFound desc = could not find container \"48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995\": container with ID starting with 48d9a217ff773d0f4dffcd3030bee5a098b5e20ae56c77675597f8c0ffca7995 not found: ID does not exist" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.836280 4851 scope.go:117] "RemoveContainer" containerID="cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d" Feb 23 13:28:35 crc kubenswrapper[4851]: E0223 13:28:35.836579 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d\": container with ID starting with cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d not found: ID does not exist" containerID="cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.836621 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d"} err="failed to get container status \"cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d\": rpc error: code = NotFound desc = could not find container \"cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d\": container with ID starting with cbaf456c2c49c4d4334db9d3e679f20c6ac076b05326d46ec49ece06d3a1aa7d not found: ID does not 
exist" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.836678 4851 scope.go:117] "RemoveContainer" containerID="910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375" Feb 23 13:28:35 crc kubenswrapper[4851]: E0223 13:28:35.836951 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375\": container with ID starting with 910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375 not found: ID does not exist" containerID="910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.836989 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375"} err="failed to get container status \"910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375\": rpc error: code = NotFound desc = could not find container \"910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375\": container with ID starting with 910ac78254f1071a2d228e1d3d197c3a3cf12be8f028baf00612cfe8f37b5375 not found: ID does not exist" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.837010 4851 scope.go:117] "RemoveContainer" containerID="b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901" Feb 23 13:28:35 crc kubenswrapper[4851]: E0223 13:28:35.837224 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901\": container with ID starting with b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901 not found: ID does not exist" containerID="b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.837257 4851 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901"} err="failed to get container status \"b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901\": rpc error: code = NotFound desc = could not find container \"b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901\": container with ID starting with b27acceede66c4e30fb768c3b9b2734361c4cc17b682f27f17d3a2db43426901 not found: ID does not exist" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.907205 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.907246 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-config-data\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.907265 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-run-httpd\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.907358 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 
13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.907379 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-log-httpd\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.907451 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-scripts\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.907473 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmtf\" (UniqueName: \"kubernetes.io/projected/02258572-7a66-47c0-a211-87f7f248785a-kube-api-access-jqmtf\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:35 crc kubenswrapper[4851]: I0223 13:28:35.977939 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9380f80-5f4a-4a1d-bc5b-1f0488d32c09" path="/var/lib/kubelet/pods/c9380f80-5f4a-4a1d-bc5b-1f0488d32c09/volumes" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.009066 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqmtf\" (UniqueName: \"kubernetes.io/projected/02258572-7a66-47c0-a211-87f7f248785a-kube-api-access-jqmtf\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.009101 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-scripts\") pod \"ceilometer-0\" (UID: 
\"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.009127 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.009145 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-config-data\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.009160 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-run-httpd\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.009231 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.009251 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-log-httpd\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.009951 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-run-httpd\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.011102 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-log-httpd\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.013906 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.015020 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.015358 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-config-data\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.024826 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-scripts\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.030220 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jqmtf\" (UniqueName: \"kubernetes.io/projected/02258572-7a66-47c0-a211-87f7f248785a-kube-api-access-jqmtf\") pod \"ceilometer-0\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.114470 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.593057 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:36 crc kubenswrapper[4851]: W0223 13:28:36.595474 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02258572_7a66_47c0_a211_87f7f248785a.slice/crio-a6bfc728ef9171c442333b1d2e02d3bc909a3650f2d8eae37b752ae0794015a1 WatchSource:0}: Error finding container a6bfc728ef9171c442333b1d2e02d3bc909a3650f2d8eae37b752ae0794015a1: Status 404 returned error can't find the container with id a6bfc728ef9171c442333b1d2e02d3bc909a3650f2d8eae37b752ae0794015a1 Feb 23 13:28:36 crc kubenswrapper[4851]: I0223 13:28:36.713428 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02258572-7a66-47c0-a211-87f7f248785a","Type":"ContainerStarted","Data":"a6bfc728ef9171c442333b1d2e02d3bc909a3650f2d8eae37b752ae0794015a1"} Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.029632 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.129809 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rml6w\" (UniqueName: \"kubernetes.io/projected/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-kube-api-access-rml6w\") pod \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.129937 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-combined-ca-bundle\") pod \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.130097 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-scripts\") pod \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.130152 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-config-data\") pod \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\" (UID: \"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5\") " Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.133528 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-scripts" (OuterVolumeSpecName: "scripts") pod "a8cf62f0-5eae-44f2-b264-6e6449fd7ff5" (UID: "a8cf62f0-5eae-44f2-b264-6e6449fd7ff5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.134088 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-kube-api-access-rml6w" (OuterVolumeSpecName: "kube-api-access-rml6w") pod "a8cf62f0-5eae-44f2-b264-6e6449fd7ff5" (UID: "a8cf62f0-5eae-44f2-b264-6e6449fd7ff5"). InnerVolumeSpecName "kube-api-access-rml6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.158128 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8cf62f0-5eae-44f2-b264-6e6449fd7ff5" (UID: "a8cf62f0-5eae-44f2-b264-6e6449fd7ff5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.160470 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-config-data" (OuterVolumeSpecName: "config-data") pod "a8cf62f0-5eae-44f2-b264-6e6449fd7ff5" (UID: "a8cf62f0-5eae-44f2-b264-6e6449fd7ff5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.232598 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rml6w\" (UniqueName: \"kubernetes.io/projected/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-kube-api-access-rml6w\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.232908 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.232920 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.232929 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.349870 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.733782 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02258572-7a66-47c0-a211-87f7f248785a","Type":"ContainerStarted","Data":"0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811"} Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.735749 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-96s8v" event={"ID":"a8cf62f0-5eae-44f2-b264-6e6449fd7ff5","Type":"ContainerDied","Data":"46ee25e257a9947a9e86d507fbdf311bfad1732ba183e0381043ed44555d9a73"} Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.735785 4851 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="46ee25e257a9947a9e86d507fbdf311bfad1732ba183e0381043ed44555d9a73" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.735811 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-96s8v" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.833625 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 13:28:37 crc kubenswrapper[4851]: E0223 13:28:37.834424 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8cf62f0-5eae-44f2-b264-6e6449fd7ff5" containerName="nova-cell0-conductor-db-sync" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.834439 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8cf62f0-5eae-44f2-b264-6e6449fd7ff5" containerName="nova-cell0-conductor-db-sync" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.834614 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8cf62f0-5eae-44f2-b264-6e6449fd7ff5" containerName="nova-cell0-conductor-db-sync" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.836174 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.837918 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.839007 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-wldbk" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.854433 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.945822 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hw46\" (UniqueName: \"kubernetes.io/projected/f7011aa2-a15d-4c99-b0a1-ae8d530b84c2-kube-api-access-4hw46\") pod \"nova-cell0-conductor-0\" (UID: \"f7011aa2-a15d-4c99-b0a1-ae8d530b84c2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.946165 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7011aa2-a15d-4c99-b0a1-ae8d530b84c2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f7011aa2-a15d-4c99-b0a1-ae8d530b84c2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:37 crc kubenswrapper[4851]: I0223 13:28:37.946248 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7011aa2-a15d-4c99-b0a1-ae8d530b84c2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f7011aa2-a15d-4c99-b0a1-ae8d530b84c2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:38 crc kubenswrapper[4851]: I0223 13:28:38.047927 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f7011aa2-a15d-4c99-b0a1-ae8d530b84c2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f7011aa2-a15d-4c99-b0a1-ae8d530b84c2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:38 crc kubenswrapper[4851]: I0223 13:28:38.048056 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hw46\" (UniqueName: \"kubernetes.io/projected/f7011aa2-a15d-4c99-b0a1-ae8d530b84c2-kube-api-access-4hw46\") pod \"nova-cell0-conductor-0\" (UID: \"f7011aa2-a15d-4c99-b0a1-ae8d530b84c2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:38 crc kubenswrapper[4851]: I0223 13:28:38.048107 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7011aa2-a15d-4c99-b0a1-ae8d530b84c2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f7011aa2-a15d-4c99-b0a1-ae8d530b84c2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:38 crc kubenswrapper[4851]: I0223 13:28:38.052578 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7011aa2-a15d-4c99-b0a1-ae8d530b84c2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f7011aa2-a15d-4c99-b0a1-ae8d530b84c2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:38 crc kubenswrapper[4851]: I0223 13:28:38.052701 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7011aa2-a15d-4c99-b0a1-ae8d530b84c2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f7011aa2-a15d-4c99-b0a1-ae8d530b84c2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:38 crc kubenswrapper[4851]: I0223 13:28:38.066926 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hw46\" (UniqueName: \"kubernetes.io/projected/f7011aa2-a15d-4c99-b0a1-ae8d530b84c2-kube-api-access-4hw46\") pod \"nova-cell0-conductor-0\" 
(UID: \"f7011aa2-a15d-4c99-b0a1-ae8d530b84c2\") " pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:38 crc kubenswrapper[4851]: I0223 13:28:38.168496 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:38 crc kubenswrapper[4851]: I0223 13:28:38.634941 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 13:28:38 crc kubenswrapper[4851]: W0223 13:28:38.645725 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7011aa2_a15d_4c99_b0a1_ae8d530b84c2.slice/crio-7b175b71e1665af6012546ab5592f25c756070bc053ed01d082bdcd8852b564f WatchSource:0}: Error finding container 7b175b71e1665af6012546ab5592f25c756070bc053ed01d082bdcd8852b564f: Status 404 returned error can't find the container with id 7b175b71e1665af6012546ab5592f25c756070bc053ed01d082bdcd8852b564f Feb 23 13:28:38 crc kubenswrapper[4851]: I0223 13:28:38.745457 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02258572-7a66-47c0-a211-87f7f248785a","Type":"ContainerStarted","Data":"0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7"} Feb 23 13:28:38 crc kubenswrapper[4851]: I0223 13:28:38.745502 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02258572-7a66-47c0-a211-87f7f248785a","Type":"ContainerStarted","Data":"1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113"} Feb 23 13:28:38 crc kubenswrapper[4851]: I0223 13:28:38.746961 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f7011aa2-a15d-4c99-b0a1-ae8d530b84c2","Type":"ContainerStarted","Data":"7b175b71e1665af6012546ab5592f25c756070bc053ed01d082bdcd8852b564f"} Feb 23 13:28:39 crc kubenswrapper[4851]: I0223 13:28:39.757653 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-0" event={"ID":"f7011aa2-a15d-4c99-b0a1-ae8d530b84c2","Type":"ContainerStarted","Data":"efc19125f86661635943eb510cf4bbabf7a0b0169c72e2f95ce9c7fd55832e20"} Feb 23 13:28:39 crc kubenswrapper[4851]: I0223 13:28:39.758061 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:39 crc kubenswrapper[4851]: I0223 13:28:39.778687 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.778644372 podStartE2EDuration="2.778644372s" podCreationTimestamp="2026-02-23 13:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:39.777548091 +0000 UTC m=+1274.459251779" watchObservedRunningTime="2026-02-23 13:28:39.778644372 +0000 UTC m=+1274.460348060" Feb 23 13:28:40 crc kubenswrapper[4851]: I0223 13:28:40.771619 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02258572-7a66-47c0-a211-87f7f248785a","Type":"ContainerStarted","Data":"adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c"} Feb 23 13:28:40 crc kubenswrapper[4851]: I0223 13:28:40.772858 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="sg-core" containerID="cri-o://0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7" gracePeriod=30 Feb 23 13:28:40 crc kubenswrapper[4851]: I0223 13:28:40.772885 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="ceilometer-notification-agent" containerID="cri-o://1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113" gracePeriod=30 Feb 23 13:28:40 crc kubenswrapper[4851]: I0223 13:28:40.772914 4851 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="proxy-httpd" containerID="cri-o://adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c" gracePeriod=30 Feb 23 13:28:40 crc kubenswrapper[4851]: I0223 13:28:40.773013 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="ceilometer-central-agent" containerID="cri-o://0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811" gracePeriod=30 Feb 23 13:28:40 crc kubenswrapper[4851]: I0223 13:28:40.795008 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.416749455 podStartE2EDuration="5.794981509s" podCreationTimestamp="2026-02-23 13:28:35 +0000 UTC" firstStartedPulling="2026-02-23 13:28:36.597691242 +0000 UTC m=+1271.279394920" lastFinishedPulling="2026-02-23 13:28:39.975923306 +0000 UTC m=+1274.657626974" observedRunningTime="2026-02-23 13:28:40.793540399 +0000 UTC m=+1275.475244087" watchObservedRunningTime="2026-02-23 13:28:40.794981509 +0000 UTC m=+1275.476685187" Feb 23 13:28:41 crc kubenswrapper[4851]: I0223 13:28:41.781735 4851 generic.go:334] "Generic (PLEG): container finished" podID="02258572-7a66-47c0-a211-87f7f248785a" containerID="adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c" exitCode=0 Feb 23 13:28:41 crc kubenswrapper[4851]: I0223 13:28:41.782004 4851 generic.go:334] "Generic (PLEG): container finished" podID="02258572-7a66-47c0-a211-87f7f248785a" containerID="0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7" exitCode=2 Feb 23 13:28:41 crc kubenswrapper[4851]: I0223 13:28:41.782017 4851 generic.go:334] "Generic (PLEG): container finished" podID="02258572-7a66-47c0-a211-87f7f248785a" containerID="1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113" 
exitCode=0 Feb 23 13:28:41 crc kubenswrapper[4851]: I0223 13:28:41.782049 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02258572-7a66-47c0-a211-87f7f248785a","Type":"ContainerDied","Data":"adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c"} Feb 23 13:28:41 crc kubenswrapper[4851]: I0223 13:28:41.782086 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02258572-7a66-47c0-a211-87f7f248785a","Type":"ContainerDied","Data":"0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7"} Feb 23 13:28:41 crc kubenswrapper[4851]: I0223 13:28:41.782097 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02258572-7a66-47c0-a211-87f7f248785a","Type":"ContainerDied","Data":"1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113"} Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.199317 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.714018 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7l76c"] Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.715745 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.719413 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.726894 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.733691 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l76c"] Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.847582 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r59rb\" (UniqueName: \"kubernetes.io/projected/0f4e3d1b-bc68-4384-9d6d-b4712e543629-kube-api-access-r59rb\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.847663 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-scripts\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.847713 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-config-data\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.847813 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.949584 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r59rb\" (UniqueName: \"kubernetes.io/projected/0f4e3d1b-bc68-4384-9d6d-b4712e543629-kube-api-access-r59rb\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.949655 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-scripts\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.949695 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-config-data\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.949759 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.955173 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.958004 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-config-data\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:43 crc kubenswrapper[4851]: I0223 13:28:43.962898 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-scripts\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.003655 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r59rb\" (UniqueName: \"kubernetes.io/projected/0f4e3d1b-bc68-4384-9d6d-b4712e543629-kube-api-access-r59rb\") pod \"nova-cell0-cell-mapping-7l76c\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.043206 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.062661 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.092249 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.138746 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.145587 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.147928 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.159154 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.159256 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-config-data\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.159315 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed86558-2053-47b0-9cb8-0cae6602c52d-logs\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.160706 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.164641 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pwk9\" 
(UniqueName: \"kubernetes.io/projected/eed86558-2053-47b0-9cb8-0cae6602c52d-kube-api-access-6pwk9\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.164823 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.195715 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.262640 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.264143 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.268420 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.268545 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") " pod="openstack/nova-scheduler-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.268642 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-config-data\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.268739 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dkv\" (UniqueName: \"kubernetes.io/projected/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-kube-api-access-74dkv\") pod \"nova-scheduler-0\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") " pod="openstack/nova-scheduler-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.268815 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed86558-2053-47b0-9cb8-0cae6602c52d-logs\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.268941 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-config-data\") pod \"nova-scheduler-0\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") " pod="openstack/nova-scheduler-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.269048 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pwk9\" (UniqueName: \"kubernetes.io/projected/eed86558-2053-47b0-9cb8-0cae6602c52d-kube-api-access-6pwk9\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.271682 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.273568 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed86558-2053-47b0-9cb8-0cae6602c52d-logs\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.278066 4851 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-config-data\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.307421 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.312983 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pwk9\" (UniqueName: \"kubernetes.io/projected/eed86558-2053-47b0-9cb8-0cae6602c52d-kube-api-access-6pwk9\") pod \"nova-api-0\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") " pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.334397 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.355406 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.371151 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.376955 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-config-data\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.377021 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74dkv\" (UniqueName: \"kubernetes.io/projected/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-kube-api-access-74dkv\") pod \"nova-scheduler-0\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") " pod="openstack/nova-scheduler-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.377069 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-logs\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.377101 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmq2b\" (UniqueName: \"kubernetes.io/projected/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-kube-api-access-xmq2b\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.377141 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-config-data\") pod \"nova-scheduler-0\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") " pod="openstack/nova-scheduler-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.377242 4851 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.377305 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") " pod="openstack/nova-scheduler-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.378736 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.385885 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.386979 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-config-data\") pod \"nova-scheduler-0\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") " pod="openstack/nova-scheduler-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.404211 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") " pod="openstack/nova-scheduler-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.407668 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74dkv\" (UniqueName: \"kubernetes.io/projected/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-kube-api-access-74dkv\") 
pod \"nova-scheduler-0\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") " pod="openstack/nova-scheduler-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.418408 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5cm4d"] Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.419936 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.428128 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.446773 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5cm4d"] Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478495 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478539 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjgts\" (UniqueName: \"kubernetes.io/projected/de80e2ee-94b2-41df-ada8-0af01ef7575b-kube-api-access-gjgts\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478582 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478618 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-config-data\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478639 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5zhb\" (UniqueName: \"kubernetes.io/projected/43e1da7c-2638-42c6-9852-f5882a1acce3-kube-api-access-s5zhb\") pod \"nova-cell1-novncproxy-0\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478708 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-config\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478731 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478759 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-logs\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478801 
4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmq2b\" (UniqueName: \"kubernetes.io/projected/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-kube-api-access-xmq2b\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478847 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478901 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478916 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.478937 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.480502 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-logs\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.486438 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.490813 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-config-data\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.502238 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmq2b\" (UniqueName: \"kubernetes.io/projected/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-kube-api-access-xmq2b\") pod \"nova-metadata-0\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") " pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.573544 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.581304 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-config\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.581609 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.581676 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.581733 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.581751 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: 
I0223 13:28:44.581800 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.581825 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjgts\" (UniqueName: \"kubernetes.io/projected/de80e2ee-94b2-41df-ada8-0af01ef7575b-kube-api-access-gjgts\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.581867 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.581894 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5zhb\" (UniqueName: \"kubernetes.io/projected/43e1da7c-2638-42c6-9852-f5882a1acce3-kube-api-access-s5zhb\") pod \"nova-cell1-novncproxy-0\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.582894 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.584237 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.584990 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-config\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.585114 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.585732 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.589878 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.589967 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.601588 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjgts\" (UniqueName: \"kubernetes.io/projected/de80e2ee-94b2-41df-ada8-0af01ef7575b-kube-api-access-gjgts\") pod \"dnsmasq-dns-757b4f8459-5cm4d\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.601588 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5zhb\" (UniqueName: \"kubernetes.io/projected/43e1da7c-2638-42c6-9852-f5882a1acce3-kube-api-access-s5zhb\") pod \"nova-cell1-novncproxy-0\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.731866 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.750059 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.769722 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:28:44 crc kubenswrapper[4851]: I0223 13:28:44.862360 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l76c"] Feb 23 13:28:44 crc kubenswrapper[4851]: W0223 13:28:44.872054 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f4e3d1b_bc68_4384_9d6d_b4712e543629.slice/crio-19f1fee9f4340c988842d61dbb86d82e52bd51b647208b50866f2e95faa3dded WatchSource:0}: Error finding container 19f1fee9f4340c988842d61dbb86d82e52bd51b647208b50866f2e95faa3dded: Status 404 returned error can't find the container with id 19f1fee9f4340c988842d61dbb86d82e52bd51b647208b50866f2e95faa3dded Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.049581 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.181527 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.214383 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cz56r"] Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.216482 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.220521 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.221481 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.236054 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cz56r"] Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.293766 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.296542 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.296664 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf6jk\" (UniqueName: \"kubernetes.io/projected/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-kube-api-access-xf6jk\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.296765 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-config-data\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " 
pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.296819 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-scripts\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.400161 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-config-data\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.400224 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-scripts\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.400268 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.400344 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf6jk\" (UniqueName: \"kubernetes.io/projected/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-kube-api-access-xf6jk\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " 
pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.410237 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.410261 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-scripts\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.411072 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-config-data\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.420510 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf6jk\" (UniqueName: \"kubernetes.io/projected/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-kube-api-access-xf6jk\") pod \"nova-cell1-conductor-db-sync-cz56r\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.463917 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.546522 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.546530 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5cm4d"] Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.840728 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"43e1da7c-2638-42c6-9852-f5882a1acce3","Type":"ContainerStarted","Data":"881099245ebed660a5b4a99557985226476fad3dc691026b1d9dbbec9c679b54"} Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.842952 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eed86558-2053-47b0-9cb8-0cae6602c52d","Type":"ContainerStarted","Data":"c86dbe91991b65b79768ef998df918345eff3bde8cf8fea5bf9516b04537efea"} Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.844198 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7","Type":"ContainerStarted","Data":"6fe2c71e5bb04eef88846a8691c22018bb2421eec97476b55690b495aa13002c"} Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.845417 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8bee9394-d919-4f9a-8b28-4ae318bcf0a7","Type":"ContainerStarted","Data":"6c857c3c429ec8f0b0147a67b535aa5772d36c2245b79a38a2df3a9f018c52d7"} Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.846857 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" event={"ID":"de80e2ee-94b2-41df-ada8-0af01ef7575b","Type":"ContainerStarted","Data":"f92a4d12446a80614a0d8f7a0a3b85d687d5410ac9d6a27f944258e51a99779d"} Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.848136 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l76c" 
event={"ID":"0f4e3d1b-bc68-4384-9d6d-b4712e543629","Type":"ContainerStarted","Data":"db533a071b07e5501e6b9d85464a687c9c14af8b9d432f34f718d23fa665b593"} Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.848347 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l76c" event={"ID":"0f4e3d1b-bc68-4384-9d6d-b4712e543629","Type":"ContainerStarted","Data":"19f1fee9f4340c988842d61dbb86d82e52bd51b647208b50866f2e95faa3dded"} Feb 23 13:28:45 crc kubenswrapper[4851]: I0223 13:28:45.875295 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7l76c" podStartSLOduration=2.875260937 podStartE2EDuration="2.875260937s" podCreationTimestamp="2026-02-23 13:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:45.872723316 +0000 UTC m=+1280.554427004" watchObservedRunningTime="2026-02-23 13:28:45.875260937 +0000 UTC m=+1280.556964615" Feb 23 13:28:46 crc kubenswrapper[4851]: I0223 13:28:46.042457 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cz56r"] Feb 23 13:28:46 crc kubenswrapper[4851]: I0223 13:28:46.865258 4851 generic.go:334] "Generic (PLEG): container finished" podID="de80e2ee-94b2-41df-ada8-0af01ef7575b" containerID="456ae9dfb74d4103cc98adb114335adf9c78be2d1271e1d304090ae9f9af141d" exitCode=0 Feb 23 13:28:46 crc kubenswrapper[4851]: I0223 13:28:46.865686 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" event={"ID":"de80e2ee-94b2-41df-ada8-0af01ef7575b","Type":"ContainerDied","Data":"456ae9dfb74d4103cc98adb114335adf9c78be2d1271e1d304090ae9f9af141d"} Feb 23 13:28:46 crc kubenswrapper[4851]: I0223 13:28:46.870452 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cz56r" 
event={"ID":"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c","Type":"ContainerStarted","Data":"7975ab1c739789a9c60304386c65786134384f35bdb60bc7a91c769d71037b49"} Feb 23 13:28:46 crc kubenswrapper[4851]: I0223 13:28:46.870530 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cz56r" event={"ID":"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c","Type":"ContainerStarted","Data":"ad3809a2d9e0b172afade60d192be3e8af9f584e12492e3dd71b322127f7bae3"} Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.306504 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-cz56r" podStartSLOduration=3.306432109 podStartE2EDuration="3.306432109s" podCreationTimestamp="2026-02-23 13:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:46.914177214 +0000 UTC m=+1281.595880892" watchObservedRunningTime="2026-02-23 13:28:48.306432109 +0000 UTC m=+1282.988135787" Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.315378 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.327683 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.656870 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.798465 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-sg-core-conf-yaml\") pod \"02258572-7a66-47c0-a211-87f7f248785a\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.798537 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-config-data\") pod \"02258572-7a66-47c0-a211-87f7f248785a\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.798692 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-combined-ca-bundle\") pod \"02258572-7a66-47c0-a211-87f7f248785a\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.798740 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-run-httpd\") pod \"02258572-7a66-47c0-a211-87f7f248785a\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.798763 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-log-httpd\") pod \"02258572-7a66-47c0-a211-87f7f248785a\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") " Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.798791 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqmtf\" (UniqueName: 
\"kubernetes.io/projected/02258572-7a66-47c0-a211-87f7f248785a-kube-api-access-jqmtf\") pod \"02258572-7a66-47c0-a211-87f7f248785a\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") "
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.798808 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-scripts\") pod \"02258572-7a66-47c0-a211-87f7f248785a\" (UID: \"02258572-7a66-47c0-a211-87f7f248785a\") "
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.799713 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "02258572-7a66-47c0-a211-87f7f248785a" (UID: "02258572-7a66-47c0-a211-87f7f248785a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.800042 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "02258572-7a66-47c0-a211-87f7f248785a" (UID: "02258572-7a66-47c0-a211-87f7f248785a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.802461 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-scripts" (OuterVolumeSpecName: "scripts") pod "02258572-7a66-47c0-a211-87f7f248785a" (UID: "02258572-7a66-47c0-a211-87f7f248785a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.811520 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02258572-7a66-47c0-a211-87f7f248785a-kube-api-access-jqmtf" (OuterVolumeSpecName: "kube-api-access-jqmtf") pod "02258572-7a66-47c0-a211-87f7f248785a" (UID: "02258572-7a66-47c0-a211-87f7f248785a"). InnerVolumeSpecName "kube-api-access-jqmtf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.843713 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "02258572-7a66-47c0-a211-87f7f248785a" (UID: "02258572-7a66-47c0-a211-87f7f248785a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.902402 4851 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.902442 4851 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02258572-7a66-47c0-a211-87f7f248785a-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.902458 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqmtf\" (UniqueName: \"kubernetes.io/projected/02258572-7a66-47c0-a211-87f7f248785a-kube-api-access-jqmtf\") on node \"crc\" DevicePath \"\""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.902471 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.902483 4851 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.915523 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02258572-7a66-47c0-a211-87f7f248785a" (UID: "02258572-7a66-47c0-a211-87f7f248785a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.918756 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7","Type":"ContainerStarted","Data":"a7e8111d37499d62c119f7636ae287b26f914e06057dc63662efc4f2d85beccc"}
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.918803 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7","Type":"ContainerStarted","Data":"905791ceb681eb949bb445c2d5c46d8af82e92f7af5f72d1e97eff78d705fa75"}
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.918926 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" containerName="nova-metadata-log" containerID="cri-o://905791ceb681eb949bb445c2d5c46d8af82e92f7af5f72d1e97eff78d705fa75" gracePeriod=30
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.919373 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" containerName="nova-metadata-metadata" containerID="cri-o://a7e8111d37499d62c119f7636ae287b26f914e06057dc63662efc4f2d85beccc" gracePeriod=30
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.923876 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8bee9394-d919-4f9a-8b28-4ae318bcf0a7","Type":"ContainerStarted","Data":"b11d7df342491cd095e77ec30f2eb74397c86171d421345ee72a63ca00340e7d"}
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.927920 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" event={"ID":"de80e2ee-94b2-41df-ada8-0af01ef7575b","Type":"ContainerStarted","Data":"585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46"}
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.928701 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d"
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.937534 4851 generic.go:334] "Generic (PLEG): container finished" podID="02258572-7a66-47c0-a211-87f7f248785a" containerID="0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811" exitCode=0
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.937644 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02258572-7a66-47c0-a211-87f7f248785a","Type":"ContainerDied","Data":"0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811"}
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.937705 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02258572-7a66-47c0-a211-87f7f248785a","Type":"ContainerDied","Data":"a6bfc728ef9171c442333b1d2e02d3bc909a3650f2d8eae37b752ae0794015a1"}
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.937727 4851 scope.go:117] "RemoveContainer" containerID="adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c"
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.937900 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.941016 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.8789881899999998 podStartE2EDuration="4.940007148s" podCreationTimestamp="2026-02-23 13:28:44 +0000 UTC" firstStartedPulling="2026-02-23 13:28:45.286681915 +0000 UTC m=+1279.968385593" lastFinishedPulling="2026-02-23 13:28:48.347700873 +0000 UTC m=+1283.029404551" observedRunningTime="2026-02-23 13:28:48.933239259 +0000 UTC m=+1283.614942957" watchObservedRunningTime="2026-02-23 13:28:48.940007148 +0000 UTC m=+1283.621710846"
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.946066 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"43e1da7c-2638-42c6-9852-f5882a1acce3","Type":"ContainerStarted","Data":"9f8e92cf331d342af949ca1b98b5942dd0654e6b237998d3ac175714b68ba349"}
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.946127 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="43e1da7c-2638-42c6-9852-f5882a1acce3" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9f8e92cf331d342af949ca1b98b5942dd0654e6b237998d3ac175714b68ba349" gracePeriod=30
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.956151 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-config-data" (OuterVolumeSpecName: "config-data") pod "02258572-7a66-47c0-a211-87f7f248785a" (UID: "02258572-7a66-47c0-a211-87f7f248785a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.957888 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eed86558-2053-47b0-9cb8-0cae6602c52d","Type":"ContainerStarted","Data":"03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b"}
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.959403 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" podStartSLOduration=4.95938264 podStartE2EDuration="4.95938264s" podCreationTimestamp="2026-02-23 13:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:48.949677929 +0000 UTC m=+1283.631381617" watchObservedRunningTime="2026-02-23 13:28:48.95938264 +0000 UTC m=+1283.641086328"
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.975270 4851 scope.go:117] "RemoveContainer" containerID="0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7"
Feb 23 13:28:48 crc kubenswrapper[4851]: I0223 13:28:48.988209 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.698316401 podStartE2EDuration="4.988182835s" podCreationTimestamp="2026-02-23 13:28:44 +0000 UTC" firstStartedPulling="2026-02-23 13:28:45.057224282 +0000 UTC m=+1279.738927960" lastFinishedPulling="2026-02-23 13:28:48.347090716 +0000 UTC m=+1283.028794394" observedRunningTime="2026-02-23 13:28:48.967751144 +0000 UTC m=+1283.649454832" watchObservedRunningTime="2026-02-23 13:28:48.988182835 +0000 UTC m=+1283.669886533"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.003813 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.003846 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02258572-7a66-47c0-a211-87f7f248785a-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.003996 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.106626653 podStartE2EDuration="5.003981596s" podCreationTimestamp="2026-02-23 13:28:44 +0000 UTC" firstStartedPulling="2026-02-23 13:28:45.452304215 +0000 UTC m=+1280.134007893" lastFinishedPulling="2026-02-23 13:28:48.349659158 +0000 UTC m=+1283.031362836" observedRunningTime="2026-02-23 13:28:48.98263743 +0000 UTC m=+1283.664341128" watchObservedRunningTime="2026-02-23 13:28:49.003981596 +0000 UTC m=+1283.685685284"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.011847 4851 scope.go:117] "RemoveContainer" containerID="1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.015921 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.8327896190000001 podStartE2EDuration="5.015900599s" podCreationTimestamp="2026-02-23 13:28:44 +0000 UTC" firstStartedPulling="2026-02-23 13:28:45.176522416 +0000 UTC m=+1279.858226094" lastFinishedPulling="2026-02-23 13:28:48.359633396 +0000 UTC m=+1283.041337074" observedRunningTime="2026-02-23 13:28:49.001508437 +0000 UTC m=+1283.683212145" watchObservedRunningTime="2026-02-23 13:28:49.015900599 +0000 UTC m=+1283.697604267"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.052872 4851 scope.go:117] "RemoveContainer" containerID="0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.073017 4851 scope.go:117] "RemoveContainer" containerID="adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c"
Feb 23 13:28:49 crc kubenswrapper[4851]: E0223 13:28:49.073776 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c\": container with ID starting with adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c not found: ID does not exist" containerID="adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.074092 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c"} err="failed to get container status \"adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c\": rpc error: code = NotFound desc = could not find container \"adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c\": container with ID starting with adc1fd8bfcd48ebbc342061940175503446623f2fde9ec400757649fdb8ede4c not found: ID does not exist"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.074112 4851 scope.go:117] "RemoveContainer" containerID="0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7"
Feb 23 13:28:49 crc kubenswrapper[4851]: E0223 13:28:49.074755 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7\": container with ID starting with 0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7 not found: ID does not exist" containerID="0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.074799 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7"} err="failed to get container status \"0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7\": rpc error: code = NotFound desc = could not find container \"0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7\": container with ID starting with 0e44e35d5ae8b3571bc3a2005aabd4e1acea3e98f700a51f125e341a06f744a7 not found: ID does not exist"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.074832 4851 scope.go:117] "RemoveContainer" containerID="1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113"
Feb 23 13:28:49 crc kubenswrapper[4851]: E0223 13:28:49.075217 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113\": container with ID starting with 1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113 not found: ID does not exist" containerID="1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.075277 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113"} err="failed to get container status \"1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113\": rpc error: code = NotFound desc = could not find container \"1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113\": container with ID starting with 1bc98bef4ec76c491c889e1f12235ef26745913aa77a7f3c4e485a0bbc562113 not found: ID does not exist"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.075297 4851 scope.go:117] "RemoveContainer" containerID="0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811"
Feb 23 13:28:49 crc kubenswrapper[4851]: E0223 13:28:49.075743 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811\": container with ID starting with 0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811 not found: ID does not exist" containerID="0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.075774 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811"} err="failed to get container status \"0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811\": rpc error: code = NotFound desc = could not find container \"0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811\": container with ID starting with 0ecb5db5c8b6cc33c7d5967912fd655b3360e742acacc6530ef387f094f47811 not found: ID does not exist"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.335435 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.344984 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.368556 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 23 13:28:49 crc kubenswrapper[4851]: E0223 13:28:49.368968 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="ceilometer-notification-agent"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.368985 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="ceilometer-notification-agent"
Feb 23 13:28:49 crc kubenswrapper[4851]: E0223 13:28:49.369022 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="proxy-httpd"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.369030 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="proxy-httpd"
Feb 23 13:28:49 crc kubenswrapper[4851]: E0223 13:28:49.369041 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="ceilometer-central-agent"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.369047 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="ceilometer-central-agent"
Feb 23 13:28:49 crc kubenswrapper[4851]: E0223 13:28:49.369061 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="sg-core"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.369066 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="sg-core"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.369249 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="ceilometer-notification-agent"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.369264 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="ceilometer-central-agent"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.369271 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="sg-core"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.369282 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="02258572-7a66-47c0-a211-87f7f248785a" containerName="proxy-httpd"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.371992 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.376280 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.376726 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.382256 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.428953 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.517842 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-config-data\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.518132 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-log-httpd\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.518212 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqtgn\" (UniqueName: \"kubernetes.io/projected/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-kube-api-access-dqtgn\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.518311 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.518576 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-run-httpd\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.518613 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.518831 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-scripts\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.619941 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-run-httpd\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.619986 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.620033 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-scripts\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.620070 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-config-data\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.620130 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-log-httpd\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.620148 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqtgn\" (UniqueName: \"kubernetes.io/projected/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-kube-api-access-dqtgn\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.620171 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.620453 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-run-httpd\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.620884 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-log-httpd\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.625972 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.626310 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-config-data\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.627953 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-scripts\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.633452 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.664546 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqtgn\" (UniqueName: \"kubernetes.io/projected/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-kube-api-access-dqtgn\") pod \"ceilometer-0\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.732200 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.732850 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.732742 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.751160 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:28:49 crc kubenswrapper[4851]: I0223 13:28:49.992064 4851 generic.go:334] "Generic (PLEG): container finished" podID="6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" containerID="905791ceb681eb949bb445c2d5c46d8af82e92f7af5f72d1e97eff78d705fa75" exitCode=143
Feb 23 13:28:50 crc kubenswrapper[4851]: I0223 13:28:50.011121 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02258572-7a66-47c0-a211-87f7f248785a" path="/var/lib/kubelet/pods/02258572-7a66-47c0-a211-87f7f248785a/volumes"
Feb 23 13:28:50 crc kubenswrapper[4851]: I0223 13:28:50.012280 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eed86558-2053-47b0-9cb8-0cae6602c52d","Type":"ContainerStarted","Data":"7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a"}
Feb 23 13:28:50 crc kubenswrapper[4851]: I0223 13:28:50.012316 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7","Type":"ContainerDied","Data":"905791ceb681eb949bb445c2d5c46d8af82e92f7af5f72d1e97eff78d705fa75"}
Feb 23 13:28:50 crc kubenswrapper[4851]: W0223 13:28:50.181095 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c8627db_a6c6_4cff_ac2c_dee51bd5eb6d.slice/crio-673e78a8225aedbc7f8ac2915c4ae2fda92bb39545da4fd86800d0960beb8724 WatchSource:0}: Error finding container 673e78a8225aedbc7f8ac2915c4ae2fda92bb39545da4fd86800d0960beb8724: Status 404 returned error can't find the container with id 673e78a8225aedbc7f8ac2915c4ae2fda92bb39545da4fd86800d0960beb8724
Feb 23 13:28:50 crc kubenswrapper[4851]: I0223 13:28:50.187240 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 23 13:28:51 crc kubenswrapper[4851]: I0223 13:28:51.001303 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d","Type":"ContainerStarted","Data":"d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87"}
Feb 23 13:28:51 crc kubenswrapper[4851]: I0223 13:28:51.001617 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d","Type":"ContainerStarted","Data":"673e78a8225aedbc7f8ac2915c4ae2fda92bb39545da4fd86800d0960beb8724"}
Feb 23 13:28:52 crc kubenswrapper[4851]: I0223 13:28:52.017819 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d","Type":"ContainerStarted","Data":"152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f"}
Feb 23 13:28:53 crc kubenswrapper[4851]: I0223 13:28:53.028180 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d","Type":"ContainerStarted","Data":"dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e"}
Feb 23 13:28:54 crc kubenswrapper[4851]: I0223 13:28:54.037796 4851 generic.go:334] "Generic (PLEG): container finished" podID="0f4e3d1b-bc68-4384-9d6d-b4712e543629" containerID="db533a071b07e5501e6b9d85464a687c9c14af8b9d432f34f718d23fa665b593" exitCode=0
Feb 23 13:28:54 crc kubenswrapper[4851]: I0223 13:28:54.037868 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l76c" event={"ID":"0f4e3d1b-bc68-4384-9d6d-b4712e543629","Type":"ContainerDied","Data":"db533a071b07e5501e6b9d85464a687c9c14af8b9d432f34f718d23fa665b593"}
Feb 23 13:28:54 crc kubenswrapper[4851]: I0223 13:28:54.040258 4851 generic.go:334] "Generic (PLEG): container finished" podID="6c4e808d-2f6c-4882-8cf1-32bd909b3d5c" containerID="7975ab1c739789a9c60304386c65786134384f35bdb60bc7a91c769d71037b49" exitCode=0
Feb 23 13:28:54 crc kubenswrapper[4851]: I0223 13:28:54.040362 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cz56r" event={"ID":"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c","Type":"ContainerDied","Data":"7975ab1c739789a9c60304386c65786134384f35bdb60bc7a91c769d71037b49"}
Feb 23 13:28:54 crc kubenswrapper[4851]: I0223 13:28:54.429351 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 23 13:28:54 crc kubenswrapper[4851]: I0223 13:28:54.455145 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 23 13:28:54 crc kubenswrapper[4851]: I0223 13:28:54.574853 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 23 13:28:54 crc kubenswrapper[4851]: I0223 13:28:54.575177 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 23 13:28:54 crc kubenswrapper[4851]: I0223 13:28:54.771508 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d"
Feb 23 13:28:54 crc kubenswrapper[4851]: I0223 13:28:54.847025 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d82rc"]
Feb 23 13:28:54 crc kubenswrapper[4851]: I0223 13:28:54.847298 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" podUID="e027da10-b05a-4f28-b1d0-763534dcef95" containerName="dnsmasq-dns" containerID="cri-o://f0ad73ef9bc207ec19ccd223f0a31bbd576991976f71e9c6a3753fe48f19b9b2" gracePeriod=10
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.051914 4851 generic.go:334] "Generic (PLEG): container finished" podID="e027da10-b05a-4f28-b1d0-763534dcef95" containerID="f0ad73ef9bc207ec19ccd223f0a31bbd576991976f71e9c6a3753fe48f19b9b2" exitCode=0
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.051988 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" event={"ID":"e027da10-b05a-4f28-b1d0-763534dcef95","Type":"ContainerDied","Data":"f0ad73ef9bc207ec19ccd223f0a31bbd576991976f71e9c6a3753fe48f19b9b2"}
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.058374 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d","Type":"ContainerStarted","Data":"be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90"}
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.059863 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 23 13:28:55 crc kubenswrapper[4851]: E0223 13:28:55.068825 4851 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode027da10_b05a_4f28_b1d0_763534dcef95.slice/crio-f0ad73ef9bc207ec19ccd223f0a31bbd576991976f71e9c6a3753fe48f19b9b2.scope\": RecentStats: unable to find data in memory cache]"
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.095683 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.190000076 podStartE2EDuration="6.095639132s" podCreationTimestamp="2026-02-23 13:28:49 +0000 UTC" firstStartedPulling="2026-02-23 13:28:50.183970197 +0000 UTC m=+1284.865673875" lastFinishedPulling="2026-02-23 13:28:54.089609243 +0000 UTC m=+1288.771312931" observedRunningTime="2026-02-23 13:28:55.095295423 +0000 UTC m=+1289.776999121" watchObservedRunningTime="2026-02-23 13:28:55.095639132 +0000 UTC m=+1289.777342810"
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.106893 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.476647 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc"
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.611621 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cz56r"
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.616500 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l76c"
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.662797 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.663216 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.663792 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hnf5\" (UniqueName: \"kubernetes.io/projected/e027da10-b05a-4f28-b1d0-763534dcef95-kube-api-access-7hnf5\") pod \"e027da10-b05a-4f28-b1d0-763534dcef95\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") "
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.663886 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-svc\") pod \"e027da10-b05a-4f28-b1d0-763534dcef95\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") "
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.664046 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-sb\") pod \"e027da10-b05a-4f28-b1d0-763534dcef95\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") "
Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.664589 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-nb\") pod \"e027da10-b05a-4f28-b1d0-763534dcef95\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.664677 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-config\") pod \"e027da10-b05a-4f28-b1d0-763534dcef95\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.664775 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-swift-storage-0\") pod \"e027da10-b05a-4f28-b1d0-763534dcef95\" (UID: \"e027da10-b05a-4f28-b1d0-763534dcef95\") " Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.694477 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e027da10-b05a-4f28-b1d0-763534dcef95-kube-api-access-7hnf5" (OuterVolumeSpecName: "kube-api-access-7hnf5") pod "e027da10-b05a-4f28-b1d0-763534dcef95" (UID: "e027da10-b05a-4f28-b1d0-763534dcef95"). InnerVolumeSpecName "kube-api-access-7hnf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.740430 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e027da10-b05a-4f28-b1d0-763534dcef95" (UID: "e027da10-b05a-4f28-b1d0-763534dcef95"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.747772 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e027da10-b05a-4f28-b1d0-763534dcef95" (UID: "e027da10-b05a-4f28-b1d0-763534dcef95"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.758401 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e027da10-b05a-4f28-b1d0-763534dcef95" (UID: "e027da10-b05a-4f28-b1d0-763534dcef95"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.769710 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-combined-ca-bundle\") pod \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.769762 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r59rb\" (UniqueName: \"kubernetes.io/projected/0f4e3d1b-bc68-4384-9d6d-b4712e543629-kube-api-access-r59rb\") pod \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.769796 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-scripts\") pod \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\" (UID: 
\"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.769827 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf6jk\" (UniqueName: \"kubernetes.io/projected/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-kube-api-access-xf6jk\") pod \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.769874 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-config-data\") pod \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\" (UID: \"0f4e3d1b-bc68-4384-9d6d-b4712e543629\") " Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.769953 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-config-data\") pod \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.770000 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-combined-ca-bundle\") pod \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.770033 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-scripts\") pod \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\" (UID: \"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c\") " Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.770412 4851 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.770422 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hnf5\" (UniqueName: \"kubernetes.io/projected/e027da10-b05a-4f28-b1d0-763534dcef95-kube-api-access-7hnf5\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.770433 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.770443 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.771080 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-config" (OuterVolumeSpecName: "config") pod "e027da10-b05a-4f28-b1d0-763534dcef95" (UID: "e027da10-b05a-4f28-b1d0-763534dcef95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.773211 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-scripts" (OuterVolumeSpecName: "scripts") pod "6c4e808d-2f6c-4882-8cf1-32bd909b3d5c" (UID: "6c4e808d-2f6c-4882-8cf1-32bd909b3d5c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.773725 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e027da10-b05a-4f28-b1d0-763534dcef95" (UID: "e027da10-b05a-4f28-b1d0-763534dcef95"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.775498 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-scripts" (OuterVolumeSpecName: "scripts") pod "0f4e3d1b-bc68-4384-9d6d-b4712e543629" (UID: "0f4e3d1b-bc68-4384-9d6d-b4712e543629"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.776545 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f4e3d1b-bc68-4384-9d6d-b4712e543629-kube-api-access-r59rb" (OuterVolumeSpecName: "kube-api-access-r59rb") pod "0f4e3d1b-bc68-4384-9d6d-b4712e543629" (UID: "0f4e3d1b-bc68-4384-9d6d-b4712e543629"). InnerVolumeSpecName "kube-api-access-r59rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.794512 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-kube-api-access-xf6jk" (OuterVolumeSpecName: "kube-api-access-xf6jk") pod "6c4e808d-2f6c-4882-8cf1-32bd909b3d5c" (UID: "6c4e808d-2f6c-4882-8cf1-32bd909b3d5c"). InnerVolumeSpecName "kube-api-access-xf6jk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.799918 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f4e3d1b-bc68-4384-9d6d-b4712e543629" (UID: "0f4e3d1b-bc68-4384-9d6d-b4712e543629"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.806553 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-config-data" (OuterVolumeSpecName: "config-data") pod "6c4e808d-2f6c-4882-8cf1-32bd909b3d5c" (UID: "6c4e808d-2f6c-4882-8cf1-32bd909b3d5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.806579 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-config-data" (OuterVolumeSpecName: "config-data") pod "0f4e3d1b-bc68-4384-9d6d-b4712e543629" (UID: "0f4e3d1b-bc68-4384-9d6d-b4712e543629"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.811074 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c4e808d-2f6c-4882-8cf1-32bd909b3d5c" (UID: "6c4e808d-2f6c-4882-8cf1-32bd909b3d5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.877612 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.877653 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.877664 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.877674 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.877683 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e027da10-b05a-4f28-b1d0-763534dcef95-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.877691 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.877700 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r59rb\" (UniqueName: \"kubernetes.io/projected/0f4e3d1b-bc68-4384-9d6d-b4712e543629-kube-api-access-r59rb\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.877709 4851 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.877717 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf6jk\" (UniqueName: \"kubernetes.io/projected/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c-kube-api-access-xf6jk\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:55 crc kubenswrapper[4851]: I0223 13:28:55.877725 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f4e3d1b-bc68-4384-9d6d-b4712e543629-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.101304 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" event={"ID":"e027da10-b05a-4f28-b1d0-763534dcef95","Type":"ContainerDied","Data":"141a132630d995b318ee9857ad9a6da8e9296aa332db95a893a1b8dbdb01c618"} Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.101394 4851 scope.go:117] "RemoveContainer" containerID="f0ad73ef9bc207ec19ccd223f0a31bbd576991976f71e9c6a3753fe48f19b9b2" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.101551 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-d82rc" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.120686 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7l76c" event={"ID":"0f4e3d1b-bc68-4384-9d6d-b4712e543629","Type":"ContainerDied","Data":"19f1fee9f4340c988842d61dbb86d82e52bd51b647208b50866f2e95faa3dded"} Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.120728 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19f1fee9f4340c988842d61dbb86d82e52bd51b647208b50866f2e95faa3dded" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.120800 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7l76c" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.132622 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cz56r" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.132670 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cz56r" event={"ID":"6c4e808d-2f6c-4882-8cf1-32bd909b3d5c","Type":"ContainerDied","Data":"ad3809a2d9e0b172afade60d192be3e8af9f584e12492e3dd71b322127f7bae3"} Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.132693 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad3809a2d9e0b172afade60d192be3e8af9f584e12492e3dd71b322127f7bae3" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.213962 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 23 13:28:56 crc kubenswrapper[4851]: E0223 13:28:56.214897 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e027da10-b05a-4f28-b1d0-763534dcef95" containerName="dnsmasq-dns" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.214913 4851 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e027da10-b05a-4f28-b1d0-763534dcef95" containerName="dnsmasq-dns" Feb 23 13:28:56 crc kubenswrapper[4851]: E0223 13:28:56.214936 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e027da10-b05a-4f28-b1d0-763534dcef95" containerName="init" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.214942 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="e027da10-b05a-4f28-b1d0-763534dcef95" containerName="init" Feb 23 13:28:56 crc kubenswrapper[4851]: E0223 13:28:56.214969 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4e808d-2f6c-4882-8cf1-32bd909b3d5c" containerName="nova-cell1-conductor-db-sync" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.214975 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4e808d-2f6c-4882-8cf1-32bd909b3d5c" containerName="nova-cell1-conductor-db-sync" Feb 23 13:28:56 crc kubenswrapper[4851]: E0223 13:28:56.215004 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f4e3d1b-bc68-4384-9d6d-b4712e543629" containerName="nova-manage" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.215012 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f4e3d1b-bc68-4384-9d6d-b4712e543629" containerName="nova-manage" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.215972 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f4e3d1b-bc68-4384-9d6d-b4712e543629" containerName="nova-manage" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.216030 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4e808d-2f6c-4882-8cf1-32bd909b3d5c" containerName="nova-cell1-conductor-db-sync" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.216065 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="e027da10-b05a-4f28-b1d0-763534dcef95" containerName="dnsmasq-dns" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.217425 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.228197 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.231240 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.289126 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.291559 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerName="nova-api-log" containerID="cri-o://03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b" gracePeriod=30 Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.292445 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerName="nova-api-api" containerID="cri-o://7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a" gracePeriod=30 Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.292960 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5c64fb8-ea75-4ede-b285-7aeb434b96d4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b5c64fb8-ea75-4ede-b285-7aeb434b96d4\") " pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.293095 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5c64fb8-ea75-4ede-b285-7aeb434b96d4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b5c64fb8-ea75-4ede-b285-7aeb434b96d4\") " 
pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.293141 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spl94\" (UniqueName: \"kubernetes.io/projected/b5c64fb8-ea75-4ede-b285-7aeb434b96d4-kube-api-access-spl94\") pod \"nova-cell1-conductor-0\" (UID: \"b5c64fb8-ea75-4ede-b285-7aeb434b96d4\") " pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.310712 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.394523 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5c64fb8-ea75-4ede-b285-7aeb434b96d4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b5c64fb8-ea75-4ede-b285-7aeb434b96d4\") " pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.394611 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5c64fb8-ea75-4ede-b285-7aeb434b96d4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b5c64fb8-ea75-4ede-b285-7aeb434b96d4\") " pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.394635 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spl94\" (UniqueName: \"kubernetes.io/projected/b5c64fb8-ea75-4ede-b285-7aeb434b96d4-kube-api-access-spl94\") pod \"nova-cell1-conductor-0\" (UID: \"b5c64fb8-ea75-4ede-b285-7aeb434b96d4\") " pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.402023 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5c64fb8-ea75-4ede-b285-7aeb434b96d4-config-data\") pod 
\"nova-cell1-conductor-0\" (UID: \"b5c64fb8-ea75-4ede-b285-7aeb434b96d4\") " pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.402494 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5c64fb8-ea75-4ede-b285-7aeb434b96d4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b5c64fb8-ea75-4ede-b285-7aeb434b96d4\") " pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.424943 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spl94\" (UniqueName: \"kubernetes.io/projected/b5c64fb8-ea75-4ede-b285-7aeb434b96d4-kube-api-access-spl94\") pod \"nova-cell1-conductor-0\" (UID: \"b5c64fb8-ea75-4ede-b285-7aeb434b96d4\") " pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.497343 4851 scope.go:117] "RemoveContainer" containerID="0893328bb76c5d71e956f01f489bd8fa89d4317e01b224a63b39fe9a34c44d8b" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.508733 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.530188 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d82rc"] Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.545112 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-d82rc"] Feb 23 13:28:56 crc kubenswrapper[4851]: I0223 13:28:56.965893 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 23 13:28:56 crc kubenswrapper[4851]: W0223 13:28:56.966733 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5c64fb8_ea75_4ede_b285_7aeb434b96d4.slice/crio-92c1d6675ae1eb1193e3aa916e55745eef224680d5907463c1bd4933c82185bf WatchSource:0}: Error finding container 92c1d6675ae1eb1193e3aa916e55745eef224680d5907463c1bd4933c82185bf: Status 404 returned error can't find the container with id 92c1d6675ae1eb1193e3aa916e55745eef224680d5907463c1bd4933c82185bf Feb 23 13:28:57 crc kubenswrapper[4851]: I0223 13:28:57.145313 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b5c64fb8-ea75-4ede-b285-7aeb434b96d4","Type":"ContainerStarted","Data":"b037b5f1b487df679c1fde03947489a9138c94d7effb72d42ca147030d1f2954"} Feb 23 13:28:57 crc kubenswrapper[4851]: I0223 13:28:57.146636 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 23 13:28:57 crc kubenswrapper[4851]: I0223 13:28:57.146737 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b5c64fb8-ea75-4ede-b285-7aeb434b96d4","Type":"ContainerStarted","Data":"92c1d6675ae1eb1193e3aa916e55745eef224680d5907463c1bd4933c82185bf"} Feb 23 13:28:57 crc kubenswrapper[4851]: I0223 13:28:57.152352 4851 generic.go:334] "Generic (PLEG): container 
finished" podID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerID="03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b" exitCode=143
Feb 23 13:28:57 crc kubenswrapper[4851]: I0223 13:28:57.152368 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eed86558-2053-47b0-9cb8-0cae6602c52d","Type":"ContainerDied","Data":"03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b"}
Feb 23 13:28:57 crc kubenswrapper[4851]: I0223 13:28:57.153440 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8bee9394-d919-4f9a-8b28-4ae318bcf0a7" containerName="nova-scheduler-scheduler" containerID="cri-o://b11d7df342491cd095e77ec30f2eb74397c86171d421345ee72a63ca00340e7d" gracePeriod=30
Feb 23 13:28:57 crc kubenswrapper[4851]: I0223 13:28:57.978741 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e027da10-b05a-4f28-b1d0-763534dcef95" path="/var/lib/kubelet/pods/e027da10-b05a-4f28-b1d0-763534dcef95/volumes"
Feb 23 13:28:59 crc kubenswrapper[4851]: E0223 13:28:59.430551 4851 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b11d7df342491cd095e77ec30f2eb74397c86171d421345ee72a63ca00340e7d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 13:28:59 crc kubenswrapper[4851]: E0223 13:28:59.432094 4851 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b11d7df342491cd095e77ec30f2eb74397c86171d421345ee72a63ca00340e7d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 13:28:59 crc kubenswrapper[4851]: E0223 13:28:59.433433 4851 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b11d7df342491cd095e77ec30f2eb74397c86171d421345ee72a63ca00340e7d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 23 13:28:59 crc kubenswrapper[4851]: E0223 13:28:59.433463 4851 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8bee9394-d919-4f9a-8b28-4ae318bcf0a7" containerName="nova-scheduler-scheduler"
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.186916 4851 generic.go:334] "Generic (PLEG): container finished" podID="8bee9394-d919-4f9a-8b28-4ae318bcf0a7" containerID="b11d7df342491cd095e77ec30f2eb74397c86171d421345ee72a63ca00340e7d" exitCode=0
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.186959 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8bee9394-d919-4f9a-8b28-4ae318bcf0a7","Type":"ContainerDied","Data":"b11d7df342491cd095e77ec30f2eb74397c86171d421345ee72a63ca00340e7d"}
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.518615 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.540379 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=4.540360536 podStartE2EDuration="4.540360536s" podCreationTimestamp="2026-02-23 13:28:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:28:57.164019864 +0000 UTC m=+1291.845723562" watchObservedRunningTime="2026-02-23 13:29:00.540360536 +0000 UTC m=+1295.222064214"
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.568187 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74dkv\" (UniqueName: \"kubernetes.io/projected/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-kube-api-access-74dkv\") pod \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") "
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.568304 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-combined-ca-bundle\") pod \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") "
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.568379 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-config-data\") pod \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\" (UID: \"8bee9394-d919-4f9a-8b28-4ae318bcf0a7\") "
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.576628 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-kube-api-access-74dkv" (OuterVolumeSpecName: "kube-api-access-74dkv") pod "8bee9394-d919-4f9a-8b28-4ae318bcf0a7" (UID: "8bee9394-d919-4f9a-8b28-4ae318bcf0a7"). InnerVolumeSpecName "kube-api-access-74dkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.600310 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bee9394-d919-4f9a-8b28-4ae318bcf0a7" (UID: "8bee9394-d919-4f9a-8b28-4ae318bcf0a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.622454 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-config-data" (OuterVolumeSpecName: "config-data") pod "8bee9394-d919-4f9a-8b28-4ae318bcf0a7" (UID: "8bee9394-d919-4f9a-8b28-4ae318bcf0a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.669902 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74dkv\" (UniqueName: \"kubernetes.io/projected/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-kube-api-access-74dkv\") on node \"crc\" DevicePath \"\""
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.669947 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 13:29:00 crc kubenswrapper[4851]: I0223 13:29:00.669960 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bee9394-d919-4f9a-8b28-4ae318bcf0a7-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.200521 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8bee9394-d919-4f9a-8b28-4ae318bcf0a7","Type":"ContainerDied","Data":"6c857c3c429ec8f0b0147a67b535aa5772d36c2245b79a38a2df3a9f018c52d7"}
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.200902 4851 scope.go:117] "RemoveContainer" containerID="b11d7df342491cd095e77ec30f2eb74397c86171d421345ee72a63ca00340e7d"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.200743 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.233825 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.244119 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.266975 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 13:29:01 crc kubenswrapper[4851]: E0223 13:29:01.267399 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bee9394-d919-4f9a-8b28-4ae318bcf0a7" containerName="nova-scheduler-scheduler"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.267417 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bee9394-d919-4f9a-8b28-4ae318bcf0a7" containerName="nova-scheduler-scheduler"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.267628 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bee9394-d919-4f9a-8b28-4ae318bcf0a7" containerName="nova-scheduler-scheduler"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.268237 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.277133 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.279886 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.385192 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhpc\" (UniqueName: \"kubernetes.io/projected/0fc2c4d6-01da-4560-a303-af51b362022b-kube-api-access-2zhpc\") pod \"nova-scheduler-0\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.385250 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-config-data\") pod \"nova-scheduler-0\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.385314 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.486859 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zhpc\" (UniqueName: \"kubernetes.io/projected/0fc2c4d6-01da-4560-a303-af51b362022b-kube-api-access-2zhpc\") pod \"nova-scheduler-0\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.486919 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-config-data\") pod \"nova-scheduler-0\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.486945 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.493180 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.495935 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-config-data\") pod \"nova-scheduler-0\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.502804 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zhpc\" (UniqueName: \"kubernetes.io/projected/0fc2c4d6-01da-4560-a303-af51b362022b-kube-api-access-2zhpc\") pod \"nova-scheduler-0\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.586957 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.981100 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bee9394-d919-4f9a-8b28-4ae318bcf0a7" path="/var/lib/kubelet/pods/8bee9394-d919-4f9a-8b28-4ae318bcf0a7/volumes"
Feb 23 13:29:01 crc kubenswrapper[4851]: W0223 13:29:01.998495 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fc2c4d6_01da_4560_a303_af51b362022b.slice/crio-b4ae6ff0662358fc3ea64f431fe33de7d5ba2e1400842c6aa552ba8f9d59febc WatchSource:0}: Error finding container b4ae6ff0662358fc3ea64f431fe33de7d5ba2e1400842c6aa552ba8f9d59febc: Status 404 returned error can't find the container with id b4ae6ff0662358fc3ea64f431fe33de7d5ba2e1400842c6aa552ba8f9d59febc
Feb 23 13:29:01 crc kubenswrapper[4851]: I0223 13:29:01.999300 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.088191 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.197992 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pwk9\" (UniqueName: \"kubernetes.io/projected/eed86558-2053-47b0-9cb8-0cae6602c52d-kube-api-access-6pwk9\") pod \"eed86558-2053-47b0-9cb8-0cae6602c52d\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") "
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.198085 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-combined-ca-bundle\") pod \"eed86558-2053-47b0-9cb8-0cae6602c52d\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") "
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.198269 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-config-data\") pod \"eed86558-2053-47b0-9cb8-0cae6602c52d\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") "
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.198297 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed86558-2053-47b0-9cb8-0cae6602c52d-logs\") pod \"eed86558-2053-47b0-9cb8-0cae6602c52d\" (UID: \"eed86558-2053-47b0-9cb8-0cae6602c52d\") "
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.199134 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eed86558-2053-47b0-9cb8-0cae6602c52d-logs" (OuterVolumeSpecName: "logs") pod "eed86558-2053-47b0-9cb8-0cae6602c52d" (UID: "eed86558-2053-47b0-9cb8-0cae6602c52d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.202105 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eed86558-2053-47b0-9cb8-0cae6602c52d-kube-api-access-6pwk9" (OuterVolumeSpecName: "kube-api-access-6pwk9") pod "eed86558-2053-47b0-9cb8-0cae6602c52d" (UID: "eed86558-2053-47b0-9cb8-0cae6602c52d"). InnerVolumeSpecName "kube-api-access-6pwk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.211462 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0fc2c4d6-01da-4560-a303-af51b362022b","Type":"ContainerStarted","Data":"6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03"}
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.211516 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0fc2c4d6-01da-4560-a303-af51b362022b","Type":"ContainerStarted","Data":"b4ae6ff0662358fc3ea64f431fe33de7d5ba2e1400842c6aa552ba8f9d59febc"}
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.213829 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.214084 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eed86558-2053-47b0-9cb8-0cae6602c52d","Type":"ContainerDied","Data":"7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a"}
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.214137 4851 scope.go:117] "RemoveContainer" containerID="7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.213694 4851 generic.go:334] "Generic (PLEG): container finished" podID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerID="7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a" exitCode=0
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.214468 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eed86558-2053-47b0-9cb8-0cae6602c52d","Type":"ContainerDied","Data":"c86dbe91991b65b79768ef998df918345eff3bde8cf8fea5bf9516b04537efea"}
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.222122 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-config-data" (OuterVolumeSpecName: "config-data") pod "eed86558-2053-47b0-9cb8-0cae6602c52d" (UID: "eed86558-2053-47b0-9cb8-0cae6602c52d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.231503 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eed86558-2053-47b0-9cb8-0cae6602c52d" (UID: "eed86558-2053-47b0-9cb8-0cae6602c52d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.232900 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.232882393 podStartE2EDuration="1.232882393s" podCreationTimestamp="2026-02-23 13:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:02.228192281 +0000 UTC m=+1296.909895969" watchObservedRunningTime="2026-02-23 13:29:02.232882393 +0000 UTC m=+1296.914586071"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.240099 4851 scope.go:117] "RemoveContainer" containerID="03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.257550 4851 scope.go:117] "RemoveContainer" containerID="7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a"
Feb 23 13:29:02 crc kubenswrapper[4851]: E0223 13:29:02.258187 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a\": container with ID starting with 7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a not found: ID does not exist" containerID="7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.258242 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a"} err="failed to get container status \"7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a\": rpc error: code = NotFound desc = could not find container \"7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a\": container with ID starting with 7446cc2d3613702b277f14a7a3dfa6355a6f6215574fa41b0a852e24ed2e588a not found: ID does not exist"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.258272 4851 scope.go:117] "RemoveContainer" containerID="03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b"
Feb 23 13:29:02 crc kubenswrapper[4851]: E0223 13:29:02.258721 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b\": container with ID starting with 03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b not found: ID does not exist" containerID="03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.258773 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b"} err="failed to get container status \"03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b\": rpc error: code = NotFound desc = could not find container \"03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b\": container with ID starting with 03183e4c2665eb7400eb65fb3ef560b8df3bba665c244c6937e8fe90bf86983b not found: ID does not exist"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.300458 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.300492 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed86558-2053-47b0-9cb8-0cae6602c52d-logs\") on node \"crc\" DevicePath \"\""
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.300507 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pwk9\" (UniqueName: \"kubernetes.io/projected/eed86558-2053-47b0-9cb8-0cae6602c52d-kube-api-access-6pwk9\") on node \"crc\" DevicePath \"\""
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.300517 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed86558-2053-47b0-9cb8-0cae6602c52d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.602749 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.644504 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.670948 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:29:02 crc kubenswrapper[4851]: E0223 13:29:02.671366 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerName="nova-api-api"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.671384 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerName="nova-api-api"
Feb 23 13:29:02 crc kubenswrapper[4851]: E0223 13:29:02.671417 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerName="nova-api-log"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.671423 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerName="nova-api-log"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.671577 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerName="nova-api-api"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.671599 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="eed86558-2053-47b0-9cb8-0cae6602c52d" containerName="nova-api-log"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.672471 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.675100 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.686422 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.809848 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6dk4\" (UniqueName: \"kubernetes.io/projected/b14a9f03-61a4-44f3-b79d-771701024159-kube-api-access-t6dk4\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.809969 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b14a9f03-61a4-44f3-b79d-771701024159-logs\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.810036 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-config-data\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.810069 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.911272 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6dk4\" (UniqueName: \"kubernetes.io/projected/b14a9f03-61a4-44f3-b79d-771701024159-kube-api-access-t6dk4\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.911429 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b14a9f03-61a4-44f3-b79d-771701024159-logs\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.911482 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-config-data\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.911513 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.911988 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b14a9f03-61a4-44f3-b79d-771701024159-logs\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.916374 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.917948 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-config-data\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.929720 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6dk4\" (UniqueName: \"kubernetes.io/projected/b14a9f03-61a4-44f3-b79d-771701024159-kube-api-access-t6dk4\") pod \"nova-api-0\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " pod="openstack/nova-api-0"
Feb 23 13:29:02 crc kubenswrapper[4851]: I0223 13:29:02.993780 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 23 13:29:03 crc kubenswrapper[4851]: I0223 13:29:03.412906 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 23 13:29:03 crc kubenswrapper[4851]: W0223 13:29:03.412890 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb14a9f03_61a4_44f3_b79d_771701024159.slice/crio-c595c57abd65a04eecda31b3326390577a38b9a09b7d367cb6db9ac4a415ec89 WatchSource:0}: Error finding container c595c57abd65a04eecda31b3326390577a38b9a09b7d367cb6db9ac4a415ec89: Status 404 returned error can't find the container with id c595c57abd65a04eecda31b3326390577a38b9a09b7d367cb6db9ac4a415ec89
Feb 23 13:29:03 crc kubenswrapper[4851]: I0223 13:29:03.980278 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eed86558-2053-47b0-9cb8-0cae6602c52d" path="/var/lib/kubelet/pods/eed86558-2053-47b0-9cb8-0cae6602c52d/volumes"
Feb 23 13:29:04 crc kubenswrapper[4851]: I0223 13:29:04.232422 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b14a9f03-61a4-44f3-b79d-771701024159","Type":"ContainerStarted","Data":"d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350"}
Feb 23 13:29:04 crc kubenswrapper[4851]: I0223 13:29:04.232464 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b14a9f03-61a4-44f3-b79d-771701024159","Type":"ContainerStarted","Data":"894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698"}
Feb 23 13:29:04 crc kubenswrapper[4851]: I0223 13:29:04.232474 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b14a9f03-61a4-44f3-b79d-771701024159","Type":"ContainerStarted","Data":"c595c57abd65a04eecda31b3326390577a38b9a09b7d367cb6db9ac4a415ec89"}
Feb 23 13:29:04 crc kubenswrapper[4851]: I0223 13:29:04.254377 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.254356978 podStartE2EDuration="2.254356978s" podCreationTimestamp="2026-02-23 13:29:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:04.246175538 +0000 UTC m=+1298.927879226" watchObservedRunningTime="2026-02-23 13:29:04.254356978 +0000 UTC m=+1298.936060656"
Feb 23 13:29:06 crc kubenswrapper[4851]: I0223 13:29:06.533071 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 23 13:29:06 crc kubenswrapper[4851]: I0223 13:29:06.587145 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 23 13:29:11 crc kubenswrapper[4851]: I0223 13:29:11.587857 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 23 13:29:11 crc kubenswrapper[4851]: I0223 13:29:11.612038 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 23 13:29:11 crc kubenswrapper[4851]: I0223 13:29:11.925049 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 13:29:11 crc kubenswrapper[4851]: I0223 13:29:11.925131 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 13:29:12 crc kubenswrapper[4851]: I0223 13:29:12.325444 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 23 13:29:12 crc kubenswrapper[4851]: I0223 13:29:12.995034 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 23 13:29:12 crc kubenswrapper[4851]: I0223 13:29:12.995945 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 23 13:29:14 crc kubenswrapper[4851]: I0223 13:29:14.076521 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b14a9f03-61a4-44f3-b79d-771701024159" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:29:14 crc kubenswrapper[4851]: I0223 13:29:14.076534 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b14a9f03-61a4-44f3-b79d-771701024159" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.362031 4851 generic.go:334] "Generic (PLEG): container finished" podID="43e1da7c-2638-42c6-9852-f5882a1acce3" containerID="9f8e92cf331d342af949ca1b98b5942dd0654e6b237998d3ac175714b68ba349" exitCode=137
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.362293 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"43e1da7c-2638-42c6-9852-f5882a1acce3","Type":"ContainerDied","Data":"9f8e92cf331d342af949ca1b98b5942dd0654e6b237998d3ac175714b68ba349"}
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.362714 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"43e1da7c-2638-42c6-9852-f5882a1acce3","Type":"ContainerDied","Data":"881099245ebed660a5b4a99557985226476fad3dc691026b1d9dbbec9c679b54"}
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.362747 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="881099245ebed660a5b4a99557985226476fad3dc691026b1d9dbbec9c679b54"
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.367902 4851 generic.go:334] "Generic (PLEG): container finished" podID="6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" containerID="a7e8111d37499d62c119f7636ae287b26f914e06057dc63662efc4f2d85beccc" exitCode=137
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.367948 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7","Type":"ContainerDied","Data":"a7e8111d37499d62c119f7636ae287b26f914e06057dc63662efc4f2d85beccc"}
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.367979 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7","Type":"ContainerDied","Data":"6fe2c71e5bb04eef88846a8691c22018bb2421eec97476b55690b495aa13002c"}
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.367993 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fe2c71e5bb04eef88846a8691c22018bb2421eec97476b55690b495aa13002c"
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.409032 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.419778 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.526679 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-combined-ca-bundle\") pod \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") "
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.526840 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5zhb\" (UniqueName: \"kubernetes.io/projected/43e1da7c-2638-42c6-9852-f5882a1acce3-kube-api-access-s5zhb\") pod \"43e1da7c-2638-42c6-9852-f5882a1acce3\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") "
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.526943 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmq2b\" (UniqueName: \"kubernetes.io/projected/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-kube-api-access-xmq2b\") pod \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") "
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.527013 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-logs\") pod \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") "
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.527035 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-combined-ca-bundle\") pod \"43e1da7c-2638-42c6-9852-f5882a1acce3\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") "
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.527058 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-config-data\") pod \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\" (UID: \"6a65ea1f-7d6e-47b0-b98a-0e36df8281c7\") "
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.527125 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-config-data\") pod \"43e1da7c-2638-42c6-9852-f5882a1acce3\" (UID: \"43e1da7c-2638-42c6-9852-f5882a1acce3\") "
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.528027 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-logs" (OuterVolumeSpecName: "logs") pod "6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" (UID: "6a65ea1f-7d6e-47b0-b98a-0e36df8281c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.533046 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e1da7c-2638-42c6-9852-f5882a1acce3-kube-api-access-s5zhb" (OuterVolumeSpecName: "kube-api-access-s5zhb") pod "43e1da7c-2638-42c6-9852-f5882a1acce3" (UID: "43e1da7c-2638-42c6-9852-f5882a1acce3"). InnerVolumeSpecName "kube-api-access-s5zhb".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.534417 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-kube-api-access-xmq2b" (OuterVolumeSpecName: "kube-api-access-xmq2b") pod "6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" (UID: "6a65ea1f-7d6e-47b0-b98a-0e36df8281c7"). InnerVolumeSpecName "kube-api-access-xmq2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.558470 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43e1da7c-2638-42c6-9852-f5882a1acce3" (UID: "43e1da7c-2638-42c6-9852-f5882a1acce3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.560950 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" (UID: "6a65ea1f-7d6e-47b0-b98a-0e36df8281c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.561482 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-config-data" (OuterVolumeSpecName: "config-data") pod "43e1da7c-2638-42c6-9852-f5882a1acce3" (UID: "43e1da7c-2638-42c6-9852-f5882a1acce3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.562981 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-config-data" (OuterVolumeSpecName: "config-data") pod "6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" (UID: "6a65ea1f-7d6e-47b0-b98a-0e36df8281c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.628946 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5zhb\" (UniqueName: \"kubernetes.io/projected/43e1da7c-2638-42c6-9852-f5882a1acce3-kube-api-access-s5zhb\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.628984 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmq2b\" (UniqueName: \"kubernetes.io/projected/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-kube-api-access-xmq2b\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.628994 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.629004 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.629012 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.629021 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/43e1da7c-2638-42c6-9852-f5882a1acce3-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.629028 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:19 crc kubenswrapper[4851]: I0223 13:29:19.737899 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.375470 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.375521 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.398480 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.410293 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.420699 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.436351 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.457955 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:29:20 crc kubenswrapper[4851]: E0223 13:29:20.458369 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" containerName="nova-metadata-log" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.458385 4851 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" containerName="nova-metadata-log" Feb 23 13:29:20 crc kubenswrapper[4851]: E0223 13:29:20.458404 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43e1da7c-2638-42c6-9852-f5882a1acce3" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.458410 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="43e1da7c-2638-42c6-9852-f5882a1acce3" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 13:29:20 crc kubenswrapper[4851]: E0223 13:29:20.458438 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" containerName="nova-metadata-metadata" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.458444 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" containerName="nova-metadata-metadata" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.458615 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e1da7c-2638-42c6-9852-f5882a1acce3" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.458627 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" containerName="nova-metadata-log" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.458644 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" containerName="nova-metadata-metadata" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.459235 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.462828 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.470064 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.470308 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.480385 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.481856 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.492000 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.492083 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.535066 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.604460 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.648197 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 
crc kubenswrapper[4851]: I0223 13:29:20.648253 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.648278 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.648308 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.648359 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.648389 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mrj4\" (UniqueName: \"kubernetes.io/projected/0284ac99-e112-44af-b198-eb9d42478701-kube-api-access-4mrj4\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 
13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.648433 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.648481 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88fh2\" (UniqueName: \"kubernetes.io/projected/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-kube-api-access-88fh2\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.648504 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-config-data\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.648525 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-logs\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.749825 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.749867 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.749893 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.749919 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.749944 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.749971 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mrj4\" (UniqueName: \"kubernetes.io/projected/0284ac99-e112-44af-b198-eb9d42478701-kube-api-access-4mrj4\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.749996 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.750037 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88fh2\" (UniqueName: \"kubernetes.io/projected/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-kube-api-access-88fh2\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.750054 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-config-data\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.750077 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-logs\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.750554 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-logs\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.757199 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 
13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.757256 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.757304 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-config-data\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.757695 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.758606 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.759853 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.767884 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88fh2\" 
(UniqueName: \"kubernetes.io/projected/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-kube-api-access-88fh2\") pod \"nova-metadata-0\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " pod="openstack/nova-metadata-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.768670 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0284ac99-e112-44af-b198-eb9d42478701-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.770996 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mrj4\" (UniqueName: \"kubernetes.io/projected/0284ac99-e112-44af-b198-eb9d42478701-kube-api-access-4mrj4\") pod \"nova-cell1-novncproxy-0\" (UID: \"0284ac99-e112-44af-b198-eb9d42478701\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.780777 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:20 crc kubenswrapper[4851]: I0223 13:29:20.805740 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:29:21 crc kubenswrapper[4851]: I0223 13:29:21.228838 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 13:29:21 crc kubenswrapper[4851]: W0223 13:29:21.245272 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0284ac99_e112_44af_b198_eb9d42478701.slice/crio-1945fdad4e74283c61f1443225e707e31b3ae11a775ca88449c415e9aa4a2661 WatchSource:0}: Error finding container 1945fdad4e74283c61f1443225e707e31b3ae11a775ca88449c415e9aa4a2661: Status 404 returned error can't find the container with id 1945fdad4e74283c61f1443225e707e31b3ae11a775ca88449c415e9aa4a2661 Feb 23 13:29:21 crc kubenswrapper[4851]: W0223 13:29:21.286945 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6b495ac_e87e_4db2_a35f_f9efce68ebc7.slice/crio-c9f78f6d80ae4cb555535be9284fba0ca14d2593a46477a8fc058cd37cd85b54 WatchSource:0}: Error finding container c9f78f6d80ae4cb555535be9284fba0ca14d2593a46477a8fc058cd37cd85b54: Status 404 returned error can't find the container with id c9f78f6d80ae4cb555535be9284fba0ca14d2593a46477a8fc058cd37cd85b54 Feb 23 13:29:21 crc kubenswrapper[4851]: I0223 13:29:21.291495 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:29:21 crc kubenswrapper[4851]: I0223 13:29:21.385483 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0284ac99-e112-44af-b198-eb9d42478701","Type":"ContainerStarted","Data":"1945fdad4e74283c61f1443225e707e31b3ae11a775ca88449c415e9aa4a2661"} Feb 23 13:29:21 crc kubenswrapper[4851]: I0223 13:29:21.386523 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"e6b495ac-e87e-4db2-a35f-f9efce68ebc7","Type":"ContainerStarted","Data":"c9f78f6d80ae4cb555535be9284fba0ca14d2593a46477a8fc058cd37cd85b54"} Feb 23 13:29:21 crc kubenswrapper[4851]: I0223 13:29:21.980863 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43e1da7c-2638-42c6-9852-f5882a1acce3" path="/var/lib/kubelet/pods/43e1da7c-2638-42c6-9852-f5882a1acce3/volumes" Feb 23 13:29:21 crc kubenswrapper[4851]: I0223 13:29:21.981884 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a65ea1f-7d6e-47b0-b98a-0e36df8281c7" path="/var/lib/kubelet/pods/6a65ea1f-7d6e-47b0-b98a-0e36df8281c7/volumes" Feb 23 13:29:22 crc kubenswrapper[4851]: I0223 13:29:22.396135 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6b495ac-e87e-4db2-a35f-f9efce68ebc7","Type":"ContainerStarted","Data":"ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197"} Feb 23 13:29:22 crc kubenswrapper[4851]: I0223 13:29:22.396182 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6b495ac-e87e-4db2-a35f-f9efce68ebc7","Type":"ContainerStarted","Data":"8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59"} Feb 23 13:29:22 crc kubenswrapper[4851]: I0223 13:29:22.398797 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0284ac99-e112-44af-b198-eb9d42478701","Type":"ContainerStarted","Data":"12bf60666499416b89a9e487f872256e5e05611a812346bdda6ef6afc35261e3"} Feb 23 13:29:22 crc kubenswrapper[4851]: I0223 13:29:22.419713 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.419693291 podStartE2EDuration="2.419693291s" podCreationTimestamp="2026-02-23 13:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 
13:29:22.415356979 +0000 UTC m=+1317.097060657" watchObservedRunningTime="2026-02-23 13:29:22.419693291 +0000 UTC m=+1317.101396969" Feb 23 13:29:22 crc kubenswrapper[4851]: I0223 13:29:22.440424 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.440405673 podStartE2EDuration="2.440405673s" podCreationTimestamp="2026-02-23 13:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:22.429346902 +0000 UTC m=+1317.111050600" watchObservedRunningTime="2026-02-23 13:29:22.440405673 +0000 UTC m=+1317.122109351" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.003016 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.003286 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.004300 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.008575 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.381418 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.381678 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3" containerName="kube-state-metrics" containerID="cri-o://1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d" gracePeriod=30 Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.406245 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.410533 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.589362 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmqvl"] Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.596444 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.623420 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmqvl"] Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.709356 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.709619 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.709731 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.709786 4851 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78znt\" (UniqueName: \"kubernetes.io/projected/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-kube-api-access-78znt\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.709884 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-config\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.709937 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.811311 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78znt\" (UniqueName: \"kubernetes.io/projected/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-kube-api-access-78znt\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.811410 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-config\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.811445 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.811506 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.811592 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.811633 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.812443 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.812549 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.812615 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.812318 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-config\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.813095 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.829514 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78znt\" (UniqueName: \"kubernetes.io/projected/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-kube-api-access-78znt\") pod \"dnsmasq-dns-89c5cd4d5-mmqvl\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.918550 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:23 crc kubenswrapper[4851]: I0223 13:29:23.936071 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.116952 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x8ww\" (UniqueName: \"kubernetes.io/projected/aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3-kube-api-access-5x8ww\") pod \"aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3\" (UID: \"aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3\") " Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.123691 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3-kube-api-access-5x8ww" (OuterVolumeSpecName: "kube-api-access-5x8ww") pod "aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3" (UID: "aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3"). InnerVolumeSpecName "kube-api-access-5x8ww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.219850 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x8ww\" (UniqueName: \"kubernetes.io/projected/aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3-kube-api-access-5x8ww\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.430778 4851 generic.go:334] "Generic (PLEG): container finished" podID="aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3" containerID="1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d" exitCode=2 Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.430867 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3","Type":"ContainerDied","Data":"1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d"} Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.431154 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3","Type":"ContainerDied","Data":"ec1a5785963f6ac8a8c5b6d828e9e4a7a4248257dbd748e6dec03bd76ebbf4e4"} Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.431173 4851 scope.go:117] "RemoveContainer" containerID="1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.433313 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.464516 4851 scope.go:117] "RemoveContainer" containerID="1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d" Feb 23 13:29:24 crc kubenswrapper[4851]: E0223 13:29:24.465893 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d\": container with ID starting with 1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d not found: ID does not exist" containerID="1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.465937 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d"} err="failed to get container status \"1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d\": rpc error: code = NotFound desc = could not find container \"1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d\": container with ID starting with 1a4363a5089645fc5e173585cae0580794d9852d2dfdb1a43abbde20a39dd45d not found: ID does not exist" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.471719 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmqvl"] Feb 23 13:29:24 crc kubenswrapper[4851]: W0223 13:29:24.482579 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f9baced_54f4_4e5e_ab82_6aed7824b9d7.slice/crio-f096e9335bbd7e6600753a2c41a221c2d0d9f110b709d386adfb7ba725b261c5 WatchSource:0}: Error finding container f096e9335bbd7e6600753a2c41a221c2d0d9f110b709d386adfb7ba725b261c5: Status 404 returned error can't find the container with id 
f096e9335bbd7e6600753a2c41a221c2d0d9f110b709d386adfb7ba725b261c5 Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.492600 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.507471 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.517295 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 13:29:24 crc kubenswrapper[4851]: E0223 13:29:24.517798 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3" containerName="kube-state-metrics" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.517821 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3" containerName="kube-state-metrics" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.518043 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3" containerName="kube-state-metrics" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.518862 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.522750 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.528772 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.560231 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.630427 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a55625-8b81-4ce9-afb2-2220598ce375-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.630495 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a55625-8b81-4ce9-afb2-2220598ce375-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.630533 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzll4\" (UniqueName: \"kubernetes.io/projected/a0a55625-8b81-4ce9-afb2-2220598ce375-kube-api-access-lzll4\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.630574 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/a0a55625-8b81-4ce9-afb2-2220598ce375-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.733541 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0a55625-8b81-4ce9-afb2-2220598ce375-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.734241 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a55625-8b81-4ce9-afb2-2220598ce375-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.734306 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzll4\" (UniqueName: \"kubernetes.io/projected/a0a55625-8b81-4ce9-afb2-2220598ce375-kube-api-access-lzll4\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.734405 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a0a55625-8b81-4ce9-afb2-2220598ce375-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.737174 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a0a55625-8b81-4ce9-afb2-2220598ce375-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.737913 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0a55625-8b81-4ce9-afb2-2220598ce375-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.742652 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a0a55625-8b81-4ce9-afb2-2220598ce375-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:24 crc kubenswrapper[4851]: I0223 13:29:24.754545 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzll4\" (UniqueName: \"kubernetes.io/projected/a0a55625-8b81-4ce9-afb2-2220598ce375-kube-api-access-lzll4\") pod \"kube-state-metrics-0\" (UID: \"a0a55625-8b81-4ce9-afb2-2220598ce375\") " pod="openstack/kube-state-metrics-0" Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.025235 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.443231 4851 generic.go:334] "Generic (PLEG): container finished" podID="7f9baced-54f4-4e5e-ab82-6aed7824b9d7" containerID="39d39fb99de2b0857dcd19e5bd43e0ad64727c12863a97018cae0ac76f396279" exitCode=0 Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.443277 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" event={"ID":"7f9baced-54f4-4e5e-ab82-6aed7824b9d7","Type":"ContainerDied","Data":"39d39fb99de2b0857dcd19e5bd43e0ad64727c12863a97018cae0ac76f396279"} Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.443596 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" event={"ID":"7f9baced-54f4-4e5e-ab82-6aed7824b9d7","Type":"ContainerStarted","Data":"f096e9335bbd7e6600753a2c41a221c2d0d9f110b709d386adfb7ba725b261c5"} Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.445528 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.446073 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="ceilometer-central-agent" containerID="cri-o://d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87" gracePeriod=30 Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.446183 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="proxy-httpd" containerID="cri-o://be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90" gracePeriod=30 Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.446258 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="sg-core" containerID="cri-o://dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e" gracePeriod=30 Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.446252 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="ceilometer-notification-agent" containerID="cri-o://152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f" gracePeriod=30 Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.500450 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 13:29:25 crc kubenswrapper[4851]: W0223 13:29:25.536179 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0a55625_8b81_4ce9_afb2_2220598ce375.slice/crio-3e66d5eda4792584691148d5c2d3beedc91033fe299273ab02132a71b961a0da WatchSource:0}: Error finding container 3e66d5eda4792584691148d5c2d3beedc91033fe299273ab02132a71b961a0da: Status 404 returned error can't find the container with id 3e66d5eda4792584691148d5c2d3beedc91033fe299273ab02132a71b961a0da Feb 23 13:29:25 crc kubenswrapper[4851]: E0223 13:29:25.774160 4851 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c8627db_a6c6_4cff_ac2c_dee51bd5eb6d.slice/crio-be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c8627db_a6c6_4cff_ac2c_dee51bd5eb6d.slice/crio-conmon-be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90.scope\": RecentStats: unable to find data in memory cache]" Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.781368 4851 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.806421 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.806788 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 13:29:25 crc kubenswrapper[4851]: I0223 13:29:25.981374 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3" path="/var/lib/kubelet/pods/aa510cc6-f2ed-4aa7-81f4-61b2f5b4f2c3/volumes" Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.455109 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a0a55625-8b81-4ce9-afb2-2220598ce375","Type":"ContainerStarted","Data":"4db10b205d2b2fffe1b66cc5aebde4587a3337585cea2b0f16b593e69966f5c8"} Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.455162 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a0a55625-8b81-4ce9-afb2-2220598ce375","Type":"ContainerStarted","Data":"3e66d5eda4792584691148d5c2d3beedc91033fe299273ab02132a71b961a0da"} Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.455268 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.457374 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" event={"ID":"7f9baced-54f4-4e5e-ab82-6aed7824b9d7","Type":"ContainerStarted","Data":"5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f"} Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.457830 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.460014 4851 
generic.go:334] "Generic (PLEG): container finished" podID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerID="be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90" exitCode=0 Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.460048 4851 generic.go:334] "Generic (PLEG): container finished" podID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerID="dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e" exitCode=2 Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.460060 4851 generic.go:334] "Generic (PLEG): container finished" podID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerID="d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87" exitCode=0 Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.460067 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d","Type":"ContainerDied","Data":"be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90"} Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.460092 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d","Type":"ContainerDied","Data":"dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e"} Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.460102 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d","Type":"ContainerDied","Data":"d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87"} Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.475043 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.09755472 podStartE2EDuration="2.475025082s" podCreationTimestamp="2026-02-23 13:29:24 +0000 UTC" firstStartedPulling="2026-02-23 13:29:25.539317708 +0000 UTC m=+1320.221021386" lastFinishedPulling="2026-02-23 
13:29:25.91678807 +0000 UTC m=+1320.598491748" observedRunningTime="2026-02-23 13:29:26.468702885 +0000 UTC m=+1321.150406583" watchObservedRunningTime="2026-02-23 13:29:26.475025082 +0000 UTC m=+1321.156728760" Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.493461 4851 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod6c4e808d-2f6c-4882-8cf1-32bd909b3d5c"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod6c4e808d-2f6c-4882-8cf1-32bd909b3d5c] : Timed out while waiting for systemd to remove kubepods-besteffort-pod6c4e808d_2f6c_4882_8cf1_32bd909b3d5c.slice" Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.786265 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" podStartSLOduration=3.786240215 podStartE2EDuration="3.786240215s" podCreationTimestamp="2026-02-23 13:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:26.49276043 +0000 UTC m=+1321.174464118" watchObservedRunningTime="2026-02-23 13:29:26.786240215 +0000 UTC m=+1321.467943903" Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.792757 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.793277 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b14a9f03-61a4-44f3-b79d-771701024159" containerName="nova-api-log" containerID="cri-o://894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698" gracePeriod=30 Feb 23 13:29:26 crc kubenswrapper[4851]: I0223 13:29:26.793438 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b14a9f03-61a4-44f3-b79d-771701024159" containerName="nova-api-api" 
containerID="cri-o://d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350" gracePeriod=30 Feb 23 13:29:27 crc kubenswrapper[4851]: I0223 13:29:27.470310 4851 generic.go:334] "Generic (PLEG): container finished" podID="b14a9f03-61a4-44f3-b79d-771701024159" containerID="894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698" exitCode=143 Feb 23 13:29:27 crc kubenswrapper[4851]: I0223 13:29:27.471219 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b14a9f03-61a4-44f3-b79d-771701024159","Type":"ContainerDied","Data":"894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698"} Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.005258 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.018144 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-config-data\") pod \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.018220 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqtgn\" (UniqueName: \"kubernetes.io/projected/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-kube-api-access-dqtgn\") pod \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.019060 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-combined-ca-bundle\") pod \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.019203 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-log-httpd\") pod \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.019259 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-run-httpd\") pod \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.019285 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-sg-core-conf-yaml\") pod \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.019412 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-scripts\") pod \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\" (UID: \"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d\") " Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.019668 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" (UID: "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.019844 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" (UID: "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.020171 4851 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.020196 4851 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.024175 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-kube-api-access-dqtgn" (OuterVolumeSpecName: "kube-api-access-dqtgn") pod "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" (UID: "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d"). InnerVolumeSpecName "kube-api-access-dqtgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.033471 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-scripts" (OuterVolumeSpecName: "scripts") pod "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" (UID: "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.079862 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" (UID: "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.108715 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" (UID: "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.121684 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.121713 4851 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.121723 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.121733 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqtgn\" (UniqueName: \"kubernetes.io/projected/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-kube-api-access-dqtgn\") on node 
\"crc\" DevicePath \"\"" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.164516 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-config-data" (OuterVolumeSpecName: "config-data") pod "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" (UID: "8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.223984 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.480534 4851 generic.go:334] "Generic (PLEG): container finished" podID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerID="152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f" exitCode=0 Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.480703 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d","Type":"ContainerDied","Data":"152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f"} Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.480802 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.481932 4851 scope.go:117] "RemoveContainer" containerID="be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.481808 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d","Type":"ContainerDied","Data":"673e78a8225aedbc7f8ac2915c4ae2fda92bb39545da4fd86800d0960beb8724"} Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.511640 4851 scope.go:117] "RemoveContainer" containerID="dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.522953 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.531202 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.584438 4851 scope.go:117] "RemoveContainer" containerID="152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.597061 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:28 crc kubenswrapper[4851]: E0223 13:29:28.597529 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="ceilometer-notification-agent" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.597552 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="ceilometer-notification-agent" Feb 23 13:29:28 crc kubenswrapper[4851]: E0223 13:29:28.597582 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="sg-core" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 
13:29:28.597591 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="sg-core" Feb 23 13:29:28 crc kubenswrapper[4851]: E0223 13:29:28.597605 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="proxy-httpd" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.597612 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="proxy-httpd" Feb 23 13:29:28 crc kubenswrapper[4851]: E0223 13:29:28.597638 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="ceilometer-central-agent" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.597646 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="ceilometer-central-agent" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.604559 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="ceilometer-notification-agent" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.604634 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="sg-core" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.604656 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="ceilometer-central-agent" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.604774 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" containerName="proxy-httpd" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.607062 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.608519 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.610809 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.612594 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.612759 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.630688 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddskv\" (UniqueName: \"kubernetes.io/projected/b24d87d8-356c-426a-a2a1-58e345df5d9a-kube-api-access-ddskv\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.630744 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-scripts\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.630769 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.630816 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-run-httpd\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.630846 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-config-data\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.630891 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.630951 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.631030 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-log-httpd\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.637834 4851 scope.go:117] "RemoveContainer" containerID="d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.655025 4851 scope.go:117] "RemoveContainer" 
containerID="be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90" Feb 23 13:29:28 crc kubenswrapper[4851]: E0223 13:29:28.655436 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90\": container with ID starting with be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90 not found: ID does not exist" containerID="be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.655475 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90"} err="failed to get container status \"be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90\": rpc error: code = NotFound desc = could not find container \"be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90\": container with ID starting with be403368979410c634cd2360c61c5b2a9262b1601897a881c4ba21f778fc6b90 not found: ID does not exist" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.655500 4851 scope.go:117] "RemoveContainer" containerID="dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e" Feb 23 13:29:28 crc kubenswrapper[4851]: E0223 13:29:28.655856 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e\": container with ID starting with dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e not found: ID does not exist" containerID="dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.655883 4851 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e"} err="failed to get container status \"dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e\": rpc error: code = NotFound desc = could not find container \"dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e\": container with ID starting with dee6becbfbf2f64b2174cb75ba95d0409e14ddf29d6ca3b8b907ecf2b4d26d5e not found: ID does not exist" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.655901 4851 scope.go:117] "RemoveContainer" containerID="152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f" Feb 23 13:29:28 crc kubenswrapper[4851]: E0223 13:29:28.656163 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f\": container with ID starting with 152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f not found: ID does not exist" containerID="152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.656190 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f"} err="failed to get container status \"152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f\": rpc error: code = NotFound desc = could not find container \"152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f\": container with ID starting with 152a402fbc0391e10f5157af9d1878791dc9348586e8d11a1be27368cd83410f not found: ID does not exist" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.656211 4851 scope.go:117] "RemoveContainer" containerID="d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87" Feb 23 13:29:28 crc kubenswrapper[4851]: E0223 13:29:28.656543 4851 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87\": container with ID starting with d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87 not found: ID does not exist" containerID="d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.656570 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87"} err="failed to get container status \"d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87\": rpc error: code = NotFound desc = could not find container \"d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87\": container with ID starting with d99036e928b8048b61042e055f9dc605fe93d122773dbeeef342703baf46ad87 not found: ID does not exist" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.732873 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-log-httpd\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.732970 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddskv\" (UniqueName: \"kubernetes.io/projected/b24d87d8-356c-426a-a2a1-58e345df5d9a-kube-api-access-ddskv\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.733002 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-scripts\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" 
Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.733020 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.733092 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-run-httpd\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.733115 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-config-data\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.733147 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.733167 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.733377 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-log-httpd\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.733519 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-run-httpd\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.738321 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.738564 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-scripts\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.741030 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.741360 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-config-data\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.743213 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.749966 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddskv\" (UniqueName: \"kubernetes.io/projected/b24d87d8-356c-426a-a2a1-58e345df5d9a-kube-api-access-ddskv\") pod \"ceilometer-0\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " pod="openstack/ceilometer-0" Feb 23 13:29:28 crc kubenswrapper[4851]: I0223 13:29:28.937312 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:29:29 crc kubenswrapper[4851]: I0223 13:29:29.078465 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:29 crc kubenswrapper[4851]: I0223 13:29:29.407411 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:29 crc kubenswrapper[4851]: I0223 13:29:29.493752 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b24d87d8-356c-426a-a2a1-58e345df5d9a","Type":"ContainerStarted","Data":"ce7beaffe6b005060740749303a13c8ddc2e3f6fec76512f6b978f99bd469ad2"} Feb 23 13:29:29 crc kubenswrapper[4851]: I0223 13:29:29.979083 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d" path="/var/lib/kubelet/pods/8c8627db-a6c6-4cff-ac2c-dee51bd5eb6d/volumes" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.302687 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.466664 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6dk4\" (UniqueName: \"kubernetes.io/projected/b14a9f03-61a4-44f3-b79d-771701024159-kube-api-access-t6dk4\") pod \"b14a9f03-61a4-44f3-b79d-771701024159\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.467003 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-config-data\") pod \"b14a9f03-61a4-44f3-b79d-771701024159\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.467177 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-combined-ca-bundle\") pod \"b14a9f03-61a4-44f3-b79d-771701024159\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.467324 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b14a9f03-61a4-44f3-b79d-771701024159-logs\") pod \"b14a9f03-61a4-44f3-b79d-771701024159\" (UID: \"b14a9f03-61a4-44f3-b79d-771701024159\") " Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.468065 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b14a9f03-61a4-44f3-b79d-771701024159-logs" (OuterVolumeSpecName: "logs") pod "b14a9f03-61a4-44f3-b79d-771701024159" (UID: "b14a9f03-61a4-44f3-b79d-771701024159"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.472324 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14a9f03-61a4-44f3-b79d-771701024159-kube-api-access-t6dk4" (OuterVolumeSpecName: "kube-api-access-t6dk4") pod "b14a9f03-61a4-44f3-b79d-771701024159" (UID: "b14a9f03-61a4-44f3-b79d-771701024159"). InnerVolumeSpecName "kube-api-access-t6dk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.502012 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b14a9f03-61a4-44f3-b79d-771701024159" (UID: "b14a9f03-61a4-44f3-b79d-771701024159"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.507834 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b24d87d8-356c-426a-a2a1-58e345df5d9a","Type":"ContainerStarted","Data":"b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7"} Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.510571 4851 generic.go:334] "Generic (PLEG): container finished" podID="b14a9f03-61a4-44f3-b79d-771701024159" containerID="d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350" exitCode=0 Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.510599 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b14a9f03-61a4-44f3-b79d-771701024159","Type":"ContainerDied","Data":"d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350"} Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.510614 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"b14a9f03-61a4-44f3-b79d-771701024159","Type":"ContainerDied","Data":"c595c57abd65a04eecda31b3326390577a38b9a09b7d367cb6db9ac4a415ec89"} Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.510630 4851 scope.go:117] "RemoveContainer" containerID="d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.510744 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.510924 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-config-data" (OuterVolumeSpecName: "config-data") pod "b14a9f03-61a4-44f3-b79d-771701024159" (UID: "b14a9f03-61a4-44f3-b79d-771701024159"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.537724 4851 scope.go:117] "RemoveContainer" containerID="894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.572514 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6dk4\" (UniqueName: \"kubernetes.io/projected/b14a9f03-61a4-44f3-b79d-771701024159-kube-api-access-t6dk4\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.572548 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.572560 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14a9f03-61a4-44f3-b79d-771701024159-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.572570 4851 
reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b14a9f03-61a4-44f3-b79d-771701024159-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.573847 4851 scope.go:117] "RemoveContainer" containerID="d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350" Feb 23 13:29:30 crc kubenswrapper[4851]: E0223 13:29:30.574574 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350\": container with ID starting with d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350 not found: ID does not exist" containerID="d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.574621 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350"} err="failed to get container status \"d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350\": rpc error: code = NotFound desc = could not find container \"d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350\": container with ID starting with d5be22c841e0e5a82391d62fdb9b21cbe2097c2c21ae9fd350cc927ad4b84350 not found: ID does not exist" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.574649 4851 scope.go:117] "RemoveContainer" containerID="894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698" Feb 23 13:29:30 crc kubenswrapper[4851]: E0223 13:29:30.575947 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698\": container with ID starting with 894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698 not found: ID does not exist" 
containerID="894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.575983 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698"} err="failed to get container status \"894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698\": rpc error: code = NotFound desc = could not find container \"894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698\": container with ID starting with 894746902c04b15ca7cf2cd3923c3cb22ec68d011749e06d333f222cafa35698 not found: ID does not exist" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.781651 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.806104 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.806157 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.811681 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.890318 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.908826 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.917408 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:30 crc kubenswrapper[4851]: E0223 13:29:30.917942 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14a9f03-61a4-44f3-b79d-771701024159" 
containerName="nova-api-api" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.917968 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14a9f03-61a4-44f3-b79d-771701024159" containerName="nova-api-api" Feb 23 13:29:30 crc kubenswrapper[4851]: E0223 13:29:30.917983 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14a9f03-61a4-44f3-b79d-771701024159" containerName="nova-api-log" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.917992 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14a9f03-61a4-44f3-b79d-771701024159" containerName="nova-api-log" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.918217 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14a9f03-61a4-44f3-b79d-771701024159" containerName="nova-api-log" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.918283 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14a9f03-61a4-44f3-b79d-771701024159" containerName="nova-api-api" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.919812 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.926373 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.946013 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.946237 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.946377 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.977903 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4323b2fb-917b-41d4-92d3-f19b3132aed3-logs\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.977961 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.977982 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.978050 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9sx8\" 
(UniqueName: \"kubernetes.io/projected/4323b2fb-917b-41d4-92d3-f19b3132aed3-kube-api-access-x9sx8\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.978071 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-public-tls-certs\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:30 crc kubenswrapper[4851]: I0223 13:29:30.978106 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-config-data\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.080129 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9sx8\" (UniqueName: \"kubernetes.io/projected/4323b2fb-917b-41d4-92d3-f19b3132aed3-kube-api-access-x9sx8\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.080516 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-public-tls-certs\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.080558 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-config-data\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" 
Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.080653 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4323b2fb-917b-41d4-92d3-f19b3132aed3-logs\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.080691 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.080712 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.081698 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4323b2fb-917b-41d4-92d3-f19b3132aed3-logs\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.089181 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.089307 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-config-data\") pod \"nova-api-0\" (UID: 
\"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.089493 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-public-tls-certs\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.091504 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.099929 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9sx8\" (UniqueName: \"kubernetes.io/projected/4323b2fb-917b-41d4-92d3-f19b3132aed3-kube-api-access-x9sx8\") pod \"nova-api-0\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.260533 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.525278 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b24d87d8-356c-426a-a2a1-58e345df5d9a","Type":"ContainerStarted","Data":"fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080"} Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.544542 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.728940 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:31 crc kubenswrapper[4851]: W0223 13:29:31.737367 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4323b2fb_917b_41d4_92d3_f19b3132aed3.slice/crio-7afb678142d1f35128ef78a62d3f86e220b0dab3daba69b1001316ec84425491 WatchSource:0}: Error finding container 7afb678142d1f35128ef78a62d3f86e220b0dab3daba69b1001316ec84425491: Status 404 returned error can't find the container with id 7afb678142d1f35128ef78a62d3f86e220b0dab3daba69b1001316ec84425491 Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.801570 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sfpzs"] Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.802859 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.804196 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpm98\" (UniqueName: \"kubernetes.io/projected/9f8848be-a603-4de9-9834-05c24e156662-kube-api-access-bpm98\") pod \"nova-cell1-cell-mapping-sfpzs\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.804479 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-scripts\") pod \"nova-cell1-cell-mapping-sfpzs\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.804524 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sfpzs\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.804615 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-config-data\") pod \"nova-cell1-cell-mapping-sfpzs\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.808032 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.808465 4851 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"nova-cell1-manage-scripts" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.832648 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.833006 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.833064 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sfpzs"] Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.908305 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-config-data\") pod \"nova-cell1-cell-mapping-sfpzs\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.909449 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpm98\" (UniqueName: \"kubernetes.io/projected/9f8848be-a603-4de9-9834-05c24e156662-kube-api-access-bpm98\") pod \"nova-cell1-cell-mapping-sfpzs\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.909622 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-scripts\") pod 
\"nova-cell1-cell-mapping-sfpzs\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.909713 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sfpzs\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.915037 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-scripts\") pod \"nova-cell1-cell-mapping-sfpzs\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.916265 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-config-data\") pod \"nova-cell1-cell-mapping-sfpzs\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.916877 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sfpzs\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:31 crc kubenswrapper[4851]: I0223 13:29:31.927868 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpm98\" (UniqueName: \"kubernetes.io/projected/9f8848be-a603-4de9-9834-05c24e156662-kube-api-access-bpm98\") pod \"nova-cell1-cell-mapping-sfpzs\" (UID: 
\"9f8848be-a603-4de9-9834-05c24e156662\") " pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:32 crc kubenswrapper[4851]: I0223 13:29:32.015259 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14a9f03-61a4-44f3-b79d-771701024159" path="/var/lib/kubelet/pods/b14a9f03-61a4-44f3-b79d-771701024159/volumes" Feb 23 13:29:32 crc kubenswrapper[4851]: I0223 13:29:32.199729 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:32 crc kubenswrapper[4851]: I0223 13:29:32.539544 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4323b2fb-917b-41d4-92d3-f19b3132aed3","Type":"ContainerStarted","Data":"941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd"} Feb 23 13:29:32 crc kubenswrapper[4851]: I0223 13:29:32.539917 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4323b2fb-917b-41d4-92d3-f19b3132aed3","Type":"ContainerStarted","Data":"4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f"} Feb 23 13:29:32 crc kubenswrapper[4851]: I0223 13:29:32.539940 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4323b2fb-917b-41d4-92d3-f19b3132aed3","Type":"ContainerStarted","Data":"7afb678142d1f35128ef78a62d3f86e220b0dab3daba69b1001316ec84425491"} Feb 23 13:29:32 crc kubenswrapper[4851]: I0223 13:29:32.547810 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b24d87d8-356c-426a-a2a1-58e345df5d9a","Type":"ContainerStarted","Data":"18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56"} Feb 23 13:29:32 crc kubenswrapper[4851]: I0223 13:29:32.578598 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.578579061 podStartE2EDuration="2.578579061s" podCreationTimestamp="2026-02-23 13:29:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:32.566963995 +0000 UTC m=+1327.248667683" watchObservedRunningTime="2026-02-23 13:29:32.578579061 +0000 UTC m=+1327.260282739" Feb 23 13:29:32 crc kubenswrapper[4851]: I0223 13:29:32.653510 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sfpzs"] Feb 23 13:29:33 crc kubenswrapper[4851]: I0223 13:29:33.559614 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sfpzs" event={"ID":"9f8848be-a603-4de9-9834-05c24e156662","Type":"ContainerStarted","Data":"5cfba63c563b7881e6751086ad316e7e9b34c43f7a2c138f864cbd384e383971"} Feb 23 13:29:33 crc kubenswrapper[4851]: I0223 13:29:33.559953 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sfpzs" event={"ID":"9f8848be-a603-4de9-9834-05c24e156662","Type":"ContainerStarted","Data":"02c3f507437ca4837dc732f8be8a0a30184e02cab29664872077a57a36165cd6"} Feb 23 13:29:33 crc kubenswrapper[4851]: I0223 13:29:33.583121 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sfpzs" podStartSLOduration=2.583094369 podStartE2EDuration="2.583094369s" podCreationTimestamp="2026-02-23 13:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:33.574187929 +0000 UTC m=+1328.255891607" watchObservedRunningTime="2026-02-23 13:29:33.583094369 +0000 UTC m=+1328.264798077" Feb 23 13:29:33 crc kubenswrapper[4851]: I0223 13:29:33.919791 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:29:33 crc kubenswrapper[4851]: I0223 13:29:33.989479 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5cm4d"] Feb 23 13:29:33 
crc kubenswrapper[4851]: I0223 13:29:33.989732 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" podUID="de80e2ee-94b2-41df-ada8-0af01ef7575b" containerName="dnsmasq-dns" containerID="cri-o://585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46" gracePeriod=10 Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.499461 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.571755 4851 generic.go:334] "Generic (PLEG): container finished" podID="de80e2ee-94b2-41df-ada8-0af01ef7575b" containerID="585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46" exitCode=0 Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.572028 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.571938 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" event={"ID":"de80e2ee-94b2-41df-ada8-0af01ef7575b","Type":"ContainerDied","Data":"585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46"} Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.572114 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-5cm4d" event={"ID":"de80e2ee-94b2-41df-ada8-0af01ef7575b","Type":"ContainerDied","Data":"f92a4d12446a80614a0d8f7a0a3b85d687d5410ac9d6a27f944258e51a99779d"} Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.572135 4851 scope.go:117] "RemoveContainer" containerID="585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.577865 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" 
containerName="ceilometer-central-agent" containerID="cri-o://b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7" gracePeriod=30 Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.578233 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b24d87d8-356c-426a-a2a1-58e345df5d9a","Type":"ContainerStarted","Data":"2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a"} Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.578278 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.578278 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="proxy-httpd" containerID="cri-o://2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a" gracePeriod=30 Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.578294 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="sg-core" containerID="cri-o://18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56" gracePeriod=30 Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.578381 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="ceilometer-notification-agent" containerID="cri-o://fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080" gracePeriod=30 Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.615058 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.886343541 podStartE2EDuration="6.615009187s" podCreationTimestamp="2026-02-23 13:29:28 +0000 UTC" firstStartedPulling="2026-02-23 13:29:29.418235971 +0000 UTC 
m=+1324.099939649" lastFinishedPulling="2026-02-23 13:29:34.146901617 +0000 UTC m=+1328.828605295" observedRunningTime="2026-02-23 13:29:34.604467681 +0000 UTC m=+1329.286171379" watchObservedRunningTime="2026-02-23 13:29:34.615009187 +0000 UTC m=+1329.296712865" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.652551 4851 scope.go:117] "RemoveContainer" containerID="456ae9dfb74d4103cc98adb114335adf9c78be2d1271e1d304090ae9f9af141d" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.673959 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-sb\") pod \"de80e2ee-94b2-41df-ada8-0af01ef7575b\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.674026 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-swift-storage-0\") pod \"de80e2ee-94b2-41df-ada8-0af01ef7575b\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.674067 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-nb\") pod \"de80e2ee-94b2-41df-ada8-0af01ef7575b\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.674102 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjgts\" (UniqueName: \"kubernetes.io/projected/de80e2ee-94b2-41df-ada8-0af01ef7575b-kube-api-access-gjgts\") pod \"de80e2ee-94b2-41df-ada8-0af01ef7575b\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.674187 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-config\") pod \"de80e2ee-94b2-41df-ada8-0af01ef7575b\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.674288 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-svc\") pod \"de80e2ee-94b2-41df-ada8-0af01ef7575b\" (UID: \"de80e2ee-94b2-41df-ada8-0af01ef7575b\") " Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.690281 4851 scope.go:117] "RemoveContainer" containerID="585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46" Feb 23 13:29:34 crc kubenswrapper[4851]: E0223 13:29:34.699562 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46\": container with ID starting with 585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46 not found: ID does not exist" containerID="585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.699613 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46"} err="failed to get container status \"585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46\": rpc error: code = NotFound desc = could not find container \"585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46\": container with ID starting with 585aaa24acb324b6ec79f95a86214c53e4f9c2517d31f5036579d10f2a1aba46 not found: ID does not exist" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.699642 4851 scope.go:117] "RemoveContainer" containerID="456ae9dfb74d4103cc98adb114335adf9c78be2d1271e1d304090ae9f9af141d" 
Feb 23 13:29:34 crc kubenswrapper[4851]: E0223 13:29:34.700805 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"456ae9dfb74d4103cc98adb114335adf9c78be2d1271e1d304090ae9f9af141d\": container with ID starting with 456ae9dfb74d4103cc98adb114335adf9c78be2d1271e1d304090ae9f9af141d not found: ID does not exist" containerID="456ae9dfb74d4103cc98adb114335adf9c78be2d1271e1d304090ae9f9af141d" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.700823 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456ae9dfb74d4103cc98adb114335adf9c78be2d1271e1d304090ae9f9af141d"} err="failed to get container status \"456ae9dfb74d4103cc98adb114335adf9c78be2d1271e1d304090ae9f9af141d\": rpc error: code = NotFound desc = could not find container \"456ae9dfb74d4103cc98adb114335adf9c78be2d1271e1d304090ae9f9af141d\": container with ID starting with 456ae9dfb74d4103cc98adb114335adf9c78be2d1271e1d304090ae9f9af141d not found: ID does not exist" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.721725 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de80e2ee-94b2-41df-ada8-0af01ef7575b-kube-api-access-gjgts" (OuterVolumeSpecName: "kube-api-access-gjgts") pod "de80e2ee-94b2-41df-ada8-0af01ef7575b" (UID: "de80e2ee-94b2-41df-ada8-0af01ef7575b"). InnerVolumeSpecName "kube-api-access-gjgts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.781646 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjgts\" (UniqueName: \"kubernetes.io/projected/de80e2ee-94b2-41df-ada8-0af01ef7575b-kube-api-access-gjgts\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.815955 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-config" (OuterVolumeSpecName: "config") pod "de80e2ee-94b2-41df-ada8-0af01ef7575b" (UID: "de80e2ee-94b2-41df-ada8-0af01ef7575b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.817850 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de80e2ee-94b2-41df-ada8-0af01ef7575b" (UID: "de80e2ee-94b2-41df-ada8-0af01ef7575b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.828808 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de80e2ee-94b2-41df-ada8-0af01ef7575b" (UID: "de80e2ee-94b2-41df-ada8-0af01ef7575b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.836750 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "de80e2ee-94b2-41df-ada8-0af01ef7575b" (UID: "de80e2ee-94b2-41df-ada8-0af01ef7575b"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.843691 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de80e2ee-94b2-41df-ada8-0af01ef7575b" (UID: "de80e2ee-94b2-41df-ada8-0af01ef7575b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.883851 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.883879 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.883891 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.883901 4851 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.883913 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de80e2ee-94b2-41df-ada8-0af01ef7575b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.912398 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-757b4f8459-5cm4d"] Feb 23 13:29:34 crc kubenswrapper[4851]: I0223 13:29:34.919775 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-5cm4d"] Feb 23 13:29:35 crc kubenswrapper[4851]: I0223 13:29:35.038376 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 23 13:29:35 crc kubenswrapper[4851]: I0223 13:29:35.588655 4851 generic.go:334] "Generic (PLEG): container finished" podID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerID="2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a" exitCode=0 Feb 23 13:29:35 crc kubenswrapper[4851]: I0223 13:29:35.588907 4851 generic.go:334] "Generic (PLEG): container finished" podID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerID="18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56" exitCode=2 Feb 23 13:29:35 crc kubenswrapper[4851]: I0223 13:29:35.588917 4851 generic.go:334] "Generic (PLEG): container finished" podID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerID="fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080" exitCode=0 Feb 23 13:29:35 crc kubenswrapper[4851]: I0223 13:29:35.588816 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b24d87d8-356c-426a-a2a1-58e345df5d9a","Type":"ContainerDied","Data":"2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a"} Feb 23 13:29:35 crc kubenswrapper[4851]: I0223 13:29:35.588985 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b24d87d8-356c-426a-a2a1-58e345df5d9a","Type":"ContainerDied","Data":"18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56"} Feb 23 13:29:35 crc kubenswrapper[4851]: I0223 13:29:35.588998 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b24d87d8-356c-426a-a2a1-58e345df5d9a","Type":"ContainerDied","Data":"fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080"} Feb 23 13:29:35 crc kubenswrapper[4851]: I0223 13:29:35.980542 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de80e2ee-94b2-41df-ada8-0af01ef7575b" path="/var/lib/kubelet/pods/de80e2ee-94b2-41df-ada8-0af01ef7575b/volumes" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.369136 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.381157 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-combined-ca-bundle\") pod \"b24d87d8-356c-426a-a2a1-58e345df5d9a\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.381222 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-run-httpd\") pod \"b24d87d8-356c-426a-a2a1-58e345df5d9a\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.381256 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-config-data\") pod \"b24d87d8-356c-426a-a2a1-58e345df5d9a\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.381299 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-ceilometer-tls-certs\") pod \"b24d87d8-356c-426a-a2a1-58e345df5d9a\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " Feb 23 13:29:37 
crc kubenswrapper[4851]: I0223 13:29:37.381363 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddskv\" (UniqueName: \"kubernetes.io/projected/b24d87d8-356c-426a-a2a1-58e345df5d9a-kube-api-access-ddskv\") pod \"b24d87d8-356c-426a-a2a1-58e345df5d9a\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.381437 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-log-httpd\") pod \"b24d87d8-356c-426a-a2a1-58e345df5d9a\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.381512 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-sg-core-conf-yaml\") pod \"b24d87d8-356c-426a-a2a1-58e345df5d9a\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.381593 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-scripts\") pod \"b24d87d8-356c-426a-a2a1-58e345df5d9a\" (UID: \"b24d87d8-356c-426a-a2a1-58e345df5d9a\") " Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.383279 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b24d87d8-356c-426a-a2a1-58e345df5d9a" (UID: "b24d87d8-356c-426a-a2a1-58e345df5d9a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.383429 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b24d87d8-356c-426a-a2a1-58e345df5d9a" (UID: "b24d87d8-356c-426a-a2a1-58e345df5d9a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.388444 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-scripts" (OuterVolumeSpecName: "scripts") pod "b24d87d8-356c-426a-a2a1-58e345df5d9a" (UID: "b24d87d8-356c-426a-a2a1-58e345df5d9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.388606 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24d87d8-356c-426a-a2a1-58e345df5d9a-kube-api-access-ddskv" (OuterVolumeSpecName: "kube-api-access-ddskv") pod "b24d87d8-356c-426a-a2a1-58e345df5d9a" (UID: "b24d87d8-356c-426a-a2a1-58e345df5d9a"). InnerVolumeSpecName "kube-api-access-ddskv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.444110 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b24d87d8-356c-426a-a2a1-58e345df5d9a" (UID: "b24d87d8-356c-426a-a2a1-58e345df5d9a"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.446448 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b24d87d8-356c-426a-a2a1-58e345df5d9a" (UID: "b24d87d8-356c-426a-a2a1-58e345df5d9a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.483669 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.483705 4851 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.483717 4851 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.483731 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddskv\" (UniqueName: \"kubernetes.io/projected/b24d87d8-356c-426a-a2a1-58e345df5d9a-kube-api-access-ddskv\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.483740 4851 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b24d87d8-356c-426a-a2a1-58e345df5d9a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.483748 4851 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.486255 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b24d87d8-356c-426a-a2a1-58e345df5d9a" (UID: "b24d87d8-356c-426a-a2a1-58e345df5d9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.511499 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-config-data" (OuterVolumeSpecName: "config-data") pod "b24d87d8-356c-426a-a2a1-58e345df5d9a" (UID: "b24d87d8-356c-426a-a2a1-58e345df5d9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.591622 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.591877 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24d87d8-356c-426a-a2a1-58e345df5d9a-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.608652 4851 generic.go:334] "Generic (PLEG): container finished" podID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerID="b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7" exitCode=0 Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.608695 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b24d87d8-356c-426a-a2a1-58e345df5d9a","Type":"ContainerDied","Data":"b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7"} Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.608715 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.608743 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b24d87d8-356c-426a-a2a1-58e345df5d9a","Type":"ContainerDied","Data":"ce7beaffe6b005060740749303a13c8ddc2e3f6fec76512f6b978f99bd469ad2"} Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.608763 4851 scope.go:117] "RemoveContainer" containerID="2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.638124 4851 scope.go:117] "RemoveContainer" containerID="18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.646474 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.661436 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.672998 4851 scope.go:117] "RemoveContainer" containerID="fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.673241 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:37 crc kubenswrapper[4851]: E0223 13:29:37.673795 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de80e2ee-94b2-41df-ada8-0af01ef7575b" containerName="init" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.673927 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="de80e2ee-94b2-41df-ada8-0af01ef7575b" containerName="init" Feb 23 13:29:37 crc 
kubenswrapper[4851]: E0223 13:29:37.674065 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="ceilometer-central-agent" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.674159 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="ceilometer-central-agent" Feb 23 13:29:37 crc kubenswrapper[4851]: E0223 13:29:37.674247 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="sg-core" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.674307 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="sg-core" Feb 23 13:29:37 crc kubenswrapper[4851]: E0223 13:29:37.674412 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="ceilometer-notification-agent" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.674556 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="ceilometer-notification-agent" Feb 23 13:29:37 crc kubenswrapper[4851]: E0223 13:29:37.674632 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="proxy-httpd" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.674763 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="proxy-httpd" Feb 23 13:29:37 crc kubenswrapper[4851]: E0223 13:29:37.674877 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de80e2ee-94b2-41df-ada8-0af01ef7575b" containerName="dnsmasq-dns" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.674950 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="de80e2ee-94b2-41df-ada8-0af01ef7575b" containerName="dnsmasq-dns" Feb 23 13:29:37 crc 
kubenswrapper[4851]: I0223 13:29:37.675226 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="de80e2ee-94b2-41df-ada8-0af01ef7575b" containerName="dnsmasq-dns" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.675292 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="proxy-httpd" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.675368 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="ceilometer-central-agent" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.675433 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="sg-core" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.675494 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" containerName="ceilometer-notification-agent" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.677465 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.679611 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.681476 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.681661 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.694535 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-config-data\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.694568 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0421c96-8b66-48fd-9778-da16d4eb8ef0-run-httpd\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.694589 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.694734 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0421c96-8b66-48fd-9778-da16d4eb8ef0-log-httpd\") pod \"ceilometer-0\" (UID: 
\"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.694810 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.697015 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j85s\" (UniqueName: \"kubernetes.io/projected/b0421c96-8b66-48fd-9778-da16d4eb8ef0-kube-api-access-2j85s\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.697115 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-scripts\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.697153 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.701483 4851 scope.go:117] "RemoveContainer" containerID="b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.704236 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.721579 4851 scope.go:117] 
"RemoveContainer" containerID="2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a" Feb 23 13:29:37 crc kubenswrapper[4851]: E0223 13:29:37.722482 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a\": container with ID starting with 2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a not found: ID does not exist" containerID="2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.722518 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a"} err="failed to get container status \"2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a\": rpc error: code = NotFound desc = could not find container \"2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a\": container with ID starting with 2a43c01051d6f5e5716b63025190ceb18291283d76a6f79709908c9234cba63a not found: ID does not exist" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.722550 4851 scope.go:117] "RemoveContainer" containerID="18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56" Feb 23 13:29:37 crc kubenswrapper[4851]: E0223 13:29:37.722950 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56\": container with ID starting with 18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56 not found: ID does not exist" containerID="18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.722976 4851 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56"} err="failed to get container status \"18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56\": rpc error: code = NotFound desc = could not find container \"18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56\": container with ID starting with 18ebfd4cf65e7d5097f2d0b5546e838402575ed9ae6179a92a2be51d9813fa56 not found: ID does not exist" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.722990 4851 scope.go:117] "RemoveContainer" containerID="fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080" Feb 23 13:29:37 crc kubenswrapper[4851]: E0223 13:29:37.723689 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080\": container with ID starting with fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080 not found: ID does not exist" containerID="fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.723773 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080"} err="failed to get container status \"fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080\": rpc error: code = NotFound desc = could not find container \"fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080\": container with ID starting with fd1c71b1f9531007acba0128b7c4d0b8df154bd2678115646c4bdda52ae46080 not found: ID does not exist" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.723808 4851 scope.go:117] "RemoveContainer" containerID="b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7" Feb 23 13:29:37 crc kubenswrapper[4851]: E0223 13:29:37.724180 4851 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7\": container with ID starting with b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7 not found: ID does not exist" containerID="b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.724227 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7"} err="failed to get container status \"b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7\": rpc error: code = NotFound desc = could not find container \"b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7\": container with ID starting with b67a9e9507c06334ab9aa2b5ba8386bef3e5a6612c860265d4754fe82e6e3cf7 not found: ID does not exist" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.798668 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-config-data\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.798781 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0421c96-8b66-48fd-9778-da16d4eb8ef0-run-httpd\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.798817 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" 
Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.798942 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0421c96-8b66-48fd-9778-da16d4eb8ef0-log-httpd\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.798987 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.799137 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j85s\" (UniqueName: \"kubernetes.io/projected/b0421c96-8b66-48fd-9778-da16d4eb8ef0-kube-api-access-2j85s\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.799291 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-scripts\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.799347 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.799578 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b0421c96-8b66-48fd-9778-da16d4eb8ef0-log-httpd\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.799615 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0421c96-8b66-48fd-9778-da16d4eb8ef0-run-httpd\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.802226 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-config-data\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.802566 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-scripts\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.802780 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.804453 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.813540 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0421c96-8b66-48fd-9778-da16d4eb8ef0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.817249 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j85s\" (UniqueName: \"kubernetes.io/projected/b0421c96-8b66-48fd-9778-da16d4eb8ef0-kube-api-access-2j85s\") pod \"ceilometer-0\" (UID: \"b0421c96-8b66-48fd-9778-da16d4eb8ef0\") " pod="openstack/ceilometer-0" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.982158 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24d87d8-356c-426a-a2a1-58e345df5d9a" path="/var/lib/kubelet/pods/b24d87d8-356c-426a-a2a1-58e345df5d9a/volumes" Feb 23 13:29:37 crc kubenswrapper[4851]: I0223 13:29:37.997185 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 13:29:38 crc kubenswrapper[4851]: I0223 13:29:38.436941 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 13:29:38 crc kubenswrapper[4851]: W0223 13:29:38.441130 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0421c96_8b66_48fd_9778_da16d4eb8ef0.slice/crio-474209e2743a18a284ccb4aac0318430abfdd8296eb31d9df9daf67a3d0a520f WatchSource:0}: Error finding container 474209e2743a18a284ccb4aac0318430abfdd8296eb31d9df9daf67a3d0a520f: Status 404 returned error can't find the container with id 474209e2743a18a284ccb4aac0318430abfdd8296eb31d9df9daf67a3d0a520f Feb 23 13:29:38 crc kubenswrapper[4851]: I0223 13:29:38.619016 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b0421c96-8b66-48fd-9778-da16d4eb8ef0","Type":"ContainerStarted","Data":"474209e2743a18a284ccb4aac0318430abfdd8296eb31d9df9daf67a3d0a520f"} Feb 23 13:29:38 crc kubenswrapper[4851]: I0223 13:29:38.620972 4851 generic.go:334] "Generic (PLEG): container finished" podID="9f8848be-a603-4de9-9834-05c24e156662" containerID="5cfba63c563b7881e6751086ad316e7e9b34c43f7a2c138f864cbd384e383971" exitCode=0 Feb 23 13:29:38 crc kubenswrapper[4851]: I0223 13:29:38.621006 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sfpzs" event={"ID":"9f8848be-a603-4de9-9834-05c24e156662","Type":"ContainerDied","Data":"5cfba63c563b7881e6751086ad316e7e9b34c43f7a2c138f864cbd384e383971"} Feb 23 13:29:39 crc kubenswrapper[4851]: I0223 13:29:39.634385 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0421c96-8b66-48fd-9778-da16d4eb8ef0","Type":"ContainerStarted","Data":"915322fb155f46578519b078fd7ed4c78e3c849b4a556fa0f4d44011fb350f4a"} Feb 23 13:29:39 crc kubenswrapper[4851]: I0223 13:29:39.972924 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.035792 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-scripts\") pod \"9f8848be-a603-4de9-9834-05c24e156662\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.035868 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-config-data\") pod \"9f8848be-a603-4de9-9834-05c24e156662\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.035900 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpm98\" (UniqueName: \"kubernetes.io/projected/9f8848be-a603-4de9-9834-05c24e156662-kube-api-access-bpm98\") pod \"9f8848be-a603-4de9-9834-05c24e156662\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.035962 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-combined-ca-bundle\") pod \"9f8848be-a603-4de9-9834-05c24e156662\" (UID: \"9f8848be-a603-4de9-9834-05c24e156662\") " Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.042896 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-scripts" (OuterVolumeSpecName: "scripts") pod "9f8848be-a603-4de9-9834-05c24e156662" (UID: "9f8848be-a603-4de9-9834-05c24e156662"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.043462 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8848be-a603-4de9-9834-05c24e156662-kube-api-access-bpm98" (OuterVolumeSpecName: "kube-api-access-bpm98") pod "9f8848be-a603-4de9-9834-05c24e156662" (UID: "9f8848be-a603-4de9-9834-05c24e156662"). InnerVolumeSpecName "kube-api-access-bpm98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.063197 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-config-data" (OuterVolumeSpecName: "config-data") pod "9f8848be-a603-4de9-9834-05c24e156662" (UID: "9f8848be-a603-4de9-9834-05c24e156662"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.063629 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f8848be-a603-4de9-9834-05c24e156662" (UID: "9f8848be-a603-4de9-9834-05c24e156662"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.138387 4851 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.138646 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.138659 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpm98\" (UniqueName: \"kubernetes.io/projected/9f8848be-a603-4de9-9834-05c24e156662-kube-api-access-bpm98\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.138672 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f8848be-a603-4de9-9834-05c24e156662-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.651623 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0421c96-8b66-48fd-9778-da16d4eb8ef0","Type":"ContainerStarted","Data":"8f61b4f95acd70da301db44efb727d43cf4b94213056767f30cc0448225ecb0e"} Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.652452 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0421c96-8b66-48fd-9778-da16d4eb8ef0","Type":"ContainerStarted","Data":"ab1429b71ac08943ee437add1da29ea2df75119f1428ceb4d4f13c2ecfa5129c"} Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.658316 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sfpzs" 
event={"ID":"9f8848be-a603-4de9-9834-05c24e156662","Type":"ContainerDied","Data":"02c3f507437ca4837dc732f8be8a0a30184e02cab29664872077a57a36165cd6"} Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.658378 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c3f507437ca4837dc732f8be8a0a30184e02cab29664872077a57a36165cd6" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.658599 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sfpzs" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.824362 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.824944 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.825208 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4323b2fb-917b-41d4-92d3-f19b3132aed3" containerName="nova-api-log" containerID="cri-o://4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f" gracePeriod=30 Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.825633 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4323b2fb-917b-41d4-92d3-f19b3132aed3" containerName="nova-api-api" containerID="cri-o://941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd" gracePeriod=30 Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.841161 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.846832 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0fc2c4d6-01da-4560-a303-af51b362022b" containerName="nova-scheduler-scheduler" 
containerID="cri-o://6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03" gracePeriod=30 Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.868146 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.873405 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:29:40 crc kubenswrapper[4851]: I0223 13:29:40.887582 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.467567 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.565763 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9sx8\" (UniqueName: \"kubernetes.io/projected/4323b2fb-917b-41d4-92d3-f19b3132aed3-kube-api-access-x9sx8\") pod \"4323b2fb-917b-41d4-92d3-f19b3132aed3\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.565803 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-internal-tls-certs\") pod \"4323b2fb-917b-41d4-92d3-f19b3132aed3\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.565839 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-combined-ca-bundle\") pod \"4323b2fb-917b-41d4-92d3-f19b3132aed3\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.566044 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/4323b2fb-917b-41d4-92d3-f19b3132aed3-logs\") pod \"4323b2fb-917b-41d4-92d3-f19b3132aed3\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.566097 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-config-data\") pod \"4323b2fb-917b-41d4-92d3-f19b3132aed3\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.566130 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-public-tls-certs\") pod \"4323b2fb-917b-41d4-92d3-f19b3132aed3\" (UID: \"4323b2fb-917b-41d4-92d3-f19b3132aed3\") " Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.566463 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4323b2fb-917b-41d4-92d3-f19b3132aed3-logs" (OuterVolumeSpecName: "logs") pod "4323b2fb-917b-41d4-92d3-f19b3132aed3" (UID: "4323b2fb-917b-41d4-92d3-f19b3132aed3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.566742 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4323b2fb-917b-41d4-92d3-f19b3132aed3-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.575254 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4323b2fb-917b-41d4-92d3-f19b3132aed3-kube-api-access-x9sx8" (OuterVolumeSpecName: "kube-api-access-x9sx8") pod "4323b2fb-917b-41d4-92d3-f19b3132aed3" (UID: "4323b2fb-917b-41d4-92d3-f19b3132aed3"). InnerVolumeSpecName "kube-api-access-x9sx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.597823 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4323b2fb-917b-41d4-92d3-f19b3132aed3" (UID: "4323b2fb-917b-41d4-92d3-f19b3132aed3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:41 crc kubenswrapper[4851]: E0223 13:29:41.598110 4851 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 13:29:41 crc kubenswrapper[4851]: E0223 13:29:41.603153 4851 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 13:29:41 crc kubenswrapper[4851]: E0223 13:29:41.604731 4851 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 13:29:41 crc kubenswrapper[4851]: E0223 13:29:41.604766 4851 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="0fc2c4d6-01da-4560-a303-af51b362022b" containerName="nova-scheduler-scheduler" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.606622 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-config-data" (OuterVolumeSpecName: "config-data") pod "4323b2fb-917b-41d4-92d3-f19b3132aed3" (UID: "4323b2fb-917b-41d4-92d3-f19b3132aed3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.624497 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4323b2fb-917b-41d4-92d3-f19b3132aed3" (UID: "4323b2fb-917b-41d4-92d3-f19b3132aed3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.644470 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4323b2fb-917b-41d4-92d3-f19b3132aed3" (UID: "4323b2fb-917b-41d4-92d3-f19b3132aed3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668273 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668302 4851 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668313 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9sx8\" (UniqueName: \"kubernetes.io/projected/4323b2fb-917b-41d4-92d3-f19b3132aed3-kube-api-access-x9sx8\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668323 4851 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668345 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4323b2fb-917b-41d4-92d3-f19b3132aed3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668722 4851 generic.go:334] "Generic (PLEG): container finished" podID="4323b2fb-917b-41d4-92d3-f19b3132aed3" containerID="941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd" exitCode=0 Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668754 4851 generic.go:334] "Generic (PLEG): container finished" podID="4323b2fb-917b-41d4-92d3-f19b3132aed3" containerID="4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f" exitCode=143 Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668801 4851 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668862 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4323b2fb-917b-41d4-92d3-f19b3132aed3","Type":"ContainerDied","Data":"941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd"} Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668888 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4323b2fb-917b-41d4-92d3-f19b3132aed3","Type":"ContainerDied","Data":"4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f"} Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668902 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4323b2fb-917b-41d4-92d3-f19b3132aed3","Type":"ContainerDied","Data":"7afb678142d1f35128ef78a62d3f86e220b0dab3daba69b1001316ec84425491"} Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.668916 4851 scope.go:117] "RemoveContainer" containerID="941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.675404 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.742448 4851 scope.go:117] "RemoveContainer" containerID="4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.748403 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.763868 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.773478 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:41 crc kubenswrapper[4851]: E0223 
13:29:41.774101 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4323b2fb-917b-41d4-92d3-f19b3132aed3" containerName="nova-api-api" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.774137 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4323b2fb-917b-41d4-92d3-f19b3132aed3" containerName="nova-api-api" Feb 23 13:29:41 crc kubenswrapper[4851]: E0223 13:29:41.774159 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f8848be-a603-4de9-9834-05c24e156662" containerName="nova-manage" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.774166 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f8848be-a603-4de9-9834-05c24e156662" containerName="nova-manage" Feb 23 13:29:41 crc kubenswrapper[4851]: E0223 13:29:41.774184 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4323b2fb-917b-41d4-92d3-f19b3132aed3" containerName="nova-api-log" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.774191 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="4323b2fb-917b-41d4-92d3-f19b3132aed3" containerName="nova-api-log" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.774394 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f8848be-a603-4de9-9834-05c24e156662" containerName="nova-manage" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.774545 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4323b2fb-917b-41d4-92d3-f19b3132aed3" containerName="nova-api-api" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.774568 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="4323b2fb-917b-41d4-92d3-f19b3132aed3" containerName="nova-api-log" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.778675 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.781810 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.788579 4851 scope.go:117] "RemoveContainer" containerID="941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.788823 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.789018 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.789098 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 23 13:29:41 crc kubenswrapper[4851]: E0223 13:29:41.790499 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd\": container with ID starting with 941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd not found: ID does not exist" containerID="941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.790553 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd"} err="failed to get container status \"941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd\": rpc error: code = NotFound desc = could not find container \"941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd\": container with ID starting with 941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd not found: ID does not exist" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.790585 4851 
scope.go:117] "RemoveContainer" containerID="4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f" Feb 23 13:29:41 crc kubenswrapper[4851]: E0223 13:29:41.791045 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f\": container with ID starting with 4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f not found: ID does not exist" containerID="4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.791076 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f"} err="failed to get container status \"4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f\": rpc error: code = NotFound desc = could not find container \"4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f\": container with ID starting with 4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f not found: ID does not exist" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.791097 4851 scope.go:117] "RemoveContainer" containerID="941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.791582 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd"} err="failed to get container status \"941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd\": rpc error: code = NotFound desc = could not find container \"941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd\": container with ID starting with 941ecaebb5a6a2b09e18a737b5697c19d8b262d071651d73d0340fe352e367fd not found: ID does not exist" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 
13:29:41.791616 4851 scope.go:117] "RemoveContainer" containerID="4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.792101 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f"} err="failed to get container status \"4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f\": rpc error: code = NotFound desc = could not find container \"4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f\": container with ID starting with 4196a6ec7adce0f818fc1d021e3adf8e3ac5ba16c900bf6c5e7d61bc04e48a8f not found: ID does not exist" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.872926 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.873250 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.873316 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.873505 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-th4s8\" (UniqueName: \"kubernetes.io/projected/6c5dd0b3-902e-4156-9538-fccbb6f319ae-kube-api-access-th4s8\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.873662 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-config-data\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.873763 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c5dd0b3-902e-4156-9538-fccbb6f319ae-logs\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.924862 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.924960 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.975044 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.975111 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.975153 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th4s8\" (UniqueName: \"kubernetes.io/projected/6c5dd0b3-902e-4156-9538-fccbb6f319ae-kube-api-access-th4s8\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.975211 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-config-data\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.975261 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c5dd0b3-902e-4156-9538-fccbb6f319ae-logs\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.975287 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.975903 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6c5dd0b3-902e-4156-9538-fccbb6f319ae-logs\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.980408 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-config-data\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.980695 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.981235 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4323b2fb-917b-41d4-92d3-f19b3132aed3" path="/var/lib/kubelet/pods/4323b2fb-917b-41d4-92d3-f19b3132aed3/volumes" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.982099 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:41 crc kubenswrapper[4851]: I0223 13:29:41.992218 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th4s8\" (UniqueName: \"kubernetes.io/projected/6c5dd0b3-902e-4156-9538-fccbb6f319ae-kube-api-access-th4s8\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:42 crc kubenswrapper[4851]: I0223 13:29:41.995061 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c5dd0b3-902e-4156-9538-fccbb6f319ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c5dd0b3-902e-4156-9538-fccbb6f319ae\") " pod="openstack/nova-api-0" Feb 23 13:29:42 crc kubenswrapper[4851]: I0223 13:29:42.105442 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 13:29:42 crc kubenswrapper[4851]: W0223 13:29:42.554857 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c5dd0b3_902e_4156_9538_fccbb6f319ae.slice/crio-5533345cd654020e130e161c4f95403ccec43cf5b30890a33078125444055ebe WatchSource:0}: Error finding container 5533345cd654020e130e161c4f95403ccec43cf5b30890a33078125444055ebe: Status 404 returned error can't find the container with id 5533345cd654020e130e161c4f95403ccec43cf5b30890a33078125444055ebe Feb 23 13:29:42 crc kubenswrapper[4851]: I0223 13:29:42.558950 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 13:29:42 crc kubenswrapper[4851]: I0223 13:29:42.694029 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerName="nova-metadata-log" containerID="cri-o://8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59" gracePeriod=30 Feb 23 13:29:42 crc kubenswrapper[4851]: I0223 13:29:42.694321 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c5dd0b3-902e-4156-9538-fccbb6f319ae","Type":"ContainerStarted","Data":"5533345cd654020e130e161c4f95403ccec43cf5b30890a33078125444055ebe"} Feb 23 13:29:42 crc kubenswrapper[4851]: I0223 13:29:42.694718 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerName="nova-metadata-metadata" 
containerID="cri-o://ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197" gracePeriod=30 Feb 23 13:29:43 crc kubenswrapper[4851]: I0223 13:29:43.705406 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c5dd0b3-902e-4156-9538-fccbb6f319ae","Type":"ContainerStarted","Data":"519c8599e78d1063b096ca809bee49abdecee5219c248eae23234bced39b2525"} Feb 23 13:29:43 crc kubenswrapper[4851]: I0223 13:29:43.706371 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c5dd0b3-902e-4156-9538-fccbb6f319ae","Type":"ContainerStarted","Data":"daa28d11de6b3229ec381c3a97878253401d32d2e92d5a30592fcd4fb3bd8874"} Feb 23 13:29:43 crc kubenswrapper[4851]: I0223 13:29:43.708745 4851 generic.go:334] "Generic (PLEG): container finished" podID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerID="8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59" exitCode=143 Feb 23 13:29:43 crc kubenswrapper[4851]: I0223 13:29:43.708824 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6b495ac-e87e-4db2-a35f-f9efce68ebc7","Type":"ContainerDied","Data":"8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59"} Feb 23 13:29:43 crc kubenswrapper[4851]: I0223 13:29:43.711174 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0421c96-8b66-48fd-9778-da16d4eb8ef0","Type":"ContainerStarted","Data":"94090db0078b75af5f4974f7795c14da7c42456056c3dd6b416737da621a1931"} Feb 23 13:29:43 crc kubenswrapper[4851]: I0223 13:29:43.711535 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 13:29:43 crc kubenswrapper[4851]: I0223 13:29:43.726304 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.726285988 podStartE2EDuration="2.726285988s" podCreationTimestamp="2026-02-23 13:29:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:43.722934444 +0000 UTC m=+1338.404638132" watchObservedRunningTime="2026-02-23 13:29:43.726285988 +0000 UTC m=+1338.407989666" Feb 23 13:29:43 crc kubenswrapper[4851]: I0223 13:29:43.749506 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.710709374 podStartE2EDuration="6.74948767s" podCreationTimestamp="2026-02-23 13:29:37 +0000 UTC" firstStartedPulling="2026-02-23 13:29:38.44363116 +0000 UTC m=+1333.125334838" lastFinishedPulling="2026-02-23 13:29:42.482409456 +0000 UTC m=+1337.164113134" observedRunningTime="2026-02-23 13:29:43.747897355 +0000 UTC m=+1338.429601033" watchObservedRunningTime="2026-02-23 13:29:43.74948767 +0000 UTC m=+1338.431191348" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.420451 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.533914 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-combined-ca-bundle\") pod \"0fc2c4d6-01da-4560-a303-af51b362022b\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.533972 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zhpc\" (UniqueName: \"kubernetes.io/projected/0fc2c4d6-01da-4560-a303-af51b362022b-kube-api-access-2zhpc\") pod \"0fc2c4d6-01da-4560-a303-af51b362022b\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.534018 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-config-data\") pod \"0fc2c4d6-01da-4560-a303-af51b362022b\" (UID: \"0fc2c4d6-01da-4560-a303-af51b362022b\") " Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.542525 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc2c4d6-01da-4560-a303-af51b362022b-kube-api-access-2zhpc" (OuterVolumeSpecName: "kube-api-access-2zhpc") pod "0fc2c4d6-01da-4560-a303-af51b362022b" (UID: "0fc2c4d6-01da-4560-a303-af51b362022b"). InnerVolumeSpecName "kube-api-access-2zhpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.569735 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fc2c4d6-01da-4560-a303-af51b362022b" (UID: "0fc2c4d6-01da-4560-a303-af51b362022b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.570201 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-config-data" (OuterVolumeSpecName: "config-data") pod "0fc2c4d6-01da-4560-a303-af51b362022b" (UID: "0fc2c4d6-01da-4560-a303-af51b362022b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.635946 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.635983 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fc2c4d6-01da-4560-a303-af51b362022b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.636009 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zhpc\" (UniqueName: \"kubernetes.io/projected/0fc2c4d6-01da-4560-a303-af51b362022b-kube-api-access-2zhpc\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.734635 4851 generic.go:334] "Generic (PLEG): container finished" podID="0fc2c4d6-01da-4560-a303-af51b362022b" containerID="6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03" exitCode=0 Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.734683 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0fc2c4d6-01da-4560-a303-af51b362022b","Type":"ContainerDied","Data":"6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03"} Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.734709 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0fc2c4d6-01da-4560-a303-af51b362022b","Type":"ContainerDied","Data":"b4ae6ff0662358fc3ea64f431fe33de7d5ba2e1400842c6aa552ba8f9d59febc"} Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.734684 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.734727 4851 scope.go:117] "RemoveContainer" containerID="6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.766777 4851 scope.go:117] "RemoveContainer" containerID="6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03" Feb 23 13:29:45 crc kubenswrapper[4851]: E0223 13:29:45.770522 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03\": container with ID starting with 6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03 not found: ID does not exist" containerID="6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.770566 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03"} err="failed to get container status \"6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03\": rpc error: code = NotFound desc = could not find container \"6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03\": container with ID starting with 6601904b8e0cdb5620e4b6fe04e6dc2dc600247c19cd2760d3888a25dacfde03 not found: ID does not exist" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.776150 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.797266 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.817392 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:51554->10.217.0.199:8775: read: connection reset by peer" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.817861 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": read tcp 10.217.0.2:51562->10.217.0.199:8775: read: connection reset by peer" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.823892 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:29:45 crc kubenswrapper[4851]: E0223 13:29:45.824588 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc2c4d6-01da-4560-a303-af51b362022b" containerName="nova-scheduler-scheduler" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.824659 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc2c4d6-01da-4560-a303-af51b362022b" containerName="nova-scheduler-scheduler" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.825633 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fc2c4d6-01da-4560-a303-af51b362022b" containerName="nova-scheduler-scheduler" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.826200 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.830082 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.834604 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.842271 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcffae8a-b5fd-49bf-9316-1cc871d0568c-config-data\") pod \"nova-scheduler-0\" (UID: \"dcffae8a-b5fd-49bf-9316-1cc871d0568c\") " pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.842311 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x8gf\" (UniqueName: \"kubernetes.io/projected/dcffae8a-b5fd-49bf-9316-1cc871d0568c-kube-api-access-7x8gf\") pod \"nova-scheduler-0\" (UID: \"dcffae8a-b5fd-49bf-9316-1cc871d0568c\") " pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.842703 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcffae8a-b5fd-49bf-9316-1cc871d0568c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dcffae8a-b5fd-49bf-9316-1cc871d0568c\") " pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.943634 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcffae8a-b5fd-49bf-9316-1cc871d0568c-config-data\") pod \"nova-scheduler-0\" (UID: \"dcffae8a-b5fd-49bf-9316-1cc871d0568c\") " pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.943921 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7x8gf\" (UniqueName: \"kubernetes.io/projected/dcffae8a-b5fd-49bf-9316-1cc871d0568c-kube-api-access-7x8gf\") pod \"nova-scheduler-0\" (UID: \"dcffae8a-b5fd-49bf-9316-1cc871d0568c\") " pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.944311 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcffae8a-b5fd-49bf-9316-1cc871d0568c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dcffae8a-b5fd-49bf-9316-1cc871d0568c\") " pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.947482 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcffae8a-b5fd-49bf-9316-1cc871d0568c-config-data\") pod \"nova-scheduler-0\" (UID: \"dcffae8a-b5fd-49bf-9316-1cc871d0568c\") " pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.947955 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcffae8a-b5fd-49bf-9316-1cc871d0568c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dcffae8a-b5fd-49bf-9316-1cc871d0568c\") " pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.960661 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x8gf\" (UniqueName: \"kubernetes.io/projected/dcffae8a-b5fd-49bf-9316-1cc871d0568c-kube-api-access-7x8gf\") pod \"nova-scheduler-0\" (UID: \"dcffae8a-b5fd-49bf-9316-1cc871d0568c\") " pod="openstack/nova-scheduler-0" Feb 23 13:29:45 crc kubenswrapper[4851]: I0223 13:29:45.985977 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc2c4d6-01da-4560-a303-af51b362022b" path="/var/lib/kubelet/pods/0fc2c4d6-01da-4560-a303-af51b362022b/volumes" Feb 23 
13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.212932 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.339002 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.351908 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-nova-metadata-tls-certs\") pod \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.351992 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88fh2\" (UniqueName: \"kubernetes.io/projected/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-kube-api-access-88fh2\") pod \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.352675 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-combined-ca-bundle\") pod \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.352699 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-logs\") pod \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.352728 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-config-data\") pod \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\" (UID: \"e6b495ac-e87e-4db2-a35f-f9efce68ebc7\") " Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.354345 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-logs" (OuterVolumeSpecName: "logs") pod "e6b495ac-e87e-4db2-a35f-f9efce68ebc7" (UID: "e6b495ac-e87e-4db2-a35f-f9efce68ebc7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.377511 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-kube-api-access-88fh2" (OuterVolumeSpecName: "kube-api-access-88fh2") pod "e6b495ac-e87e-4db2-a35f-f9efce68ebc7" (UID: "e6b495ac-e87e-4db2-a35f-f9efce68ebc7"). InnerVolumeSpecName "kube-api-access-88fh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.384610 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-config-data" (OuterVolumeSpecName: "config-data") pod "e6b495ac-e87e-4db2-a35f-f9efce68ebc7" (UID: "e6b495ac-e87e-4db2-a35f-f9efce68ebc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.433549 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6b495ac-e87e-4db2-a35f-f9efce68ebc7" (UID: "e6b495ac-e87e-4db2-a35f-f9efce68ebc7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.450879 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e6b495ac-e87e-4db2-a35f-f9efce68ebc7" (UID: "e6b495ac-e87e-4db2-a35f-f9efce68ebc7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.454711 4851 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.454748 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88fh2\" (UniqueName: \"kubernetes.io/projected/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-kube-api-access-88fh2\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.454761 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.454773 4851 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-logs\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.454785 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b495ac-e87e-4db2-a35f-f9efce68ebc7-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.690625 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Feb 23 13:29:46 crc kubenswrapper[4851]: W0223 13:29:46.699822 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcffae8a_b5fd_49bf_9316_1cc871d0568c.slice/crio-805c5f9130edc52c3c180f68f554cd0024bb45db5ef7cb560681150420973fce WatchSource:0}: Error finding container 805c5f9130edc52c3c180f68f554cd0024bb45db5ef7cb560681150420973fce: Status 404 returned error can't find the container with id 805c5f9130edc52c3c180f68f554cd0024bb45db5ef7cb560681150420973fce Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.746725 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dcffae8a-b5fd-49bf-9316-1cc871d0568c","Type":"ContainerStarted","Data":"805c5f9130edc52c3c180f68f554cd0024bb45db5ef7cb560681150420973fce"} Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.748267 4851 generic.go:334] "Generic (PLEG): container finished" podID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerID="ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197" exitCode=0 Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.748342 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.748352 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6b495ac-e87e-4db2-a35f-f9efce68ebc7","Type":"ContainerDied","Data":"ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197"} Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.748399 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e6b495ac-e87e-4db2-a35f-f9efce68ebc7","Type":"ContainerDied","Data":"c9f78f6d80ae4cb555535be9284fba0ca14d2593a46477a8fc058cd37cd85b54"} Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.748416 4851 scope.go:117] "RemoveContainer" containerID="ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.776699 4851 scope.go:117] "RemoveContainer" containerID="8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.791716 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.805052 4851 scope.go:117] "RemoveContainer" containerID="ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197" Feb 23 13:29:46 crc kubenswrapper[4851]: E0223 13:29:46.805691 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197\": container with ID starting with ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197 not found: ID does not exist" containerID="ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.805735 4851 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197"} err="failed to get container status \"ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197\": rpc error: code = NotFound desc = could not find container \"ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197\": container with ID starting with ce29fa22b2f473284dcdd56a063b46b92c1e74d3832bddff0514a58c83c7b197 not found: ID does not exist" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.805762 4851 scope.go:117] "RemoveContainer" containerID="8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59" Feb 23 13:29:46 crc kubenswrapper[4851]: E0223 13:29:46.806998 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59\": container with ID starting with 8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59 not found: ID does not exist" containerID="8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.807019 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59"} err="failed to get container status \"8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59\": rpc error: code = NotFound desc = could not find container \"8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59\": container with ID starting with 8e01a9ea2f216e0c577b70334363d6ed2013dcdbd82233fb22144c0531601a59 not found: ID does not exist" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.815122 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.848822 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] 
Feb 23 13:29:46 crc kubenswrapper[4851]: E0223 13:29:46.849872 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerName="nova-metadata-log" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.849900 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerName="nova-metadata-log" Feb 23 13:29:46 crc kubenswrapper[4851]: E0223 13:29:46.849982 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerName="nova-metadata-metadata" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.849994 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerName="nova-metadata-metadata" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.850673 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerName="nova-metadata-log" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.850723 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" containerName="nova-metadata-metadata" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.853260 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.858419 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.858703 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.862887 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.964437 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f06b9e12-5e93-4ed8-80f1-733ce28508c1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.964527 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06b9e12-5e93-4ed8-80f1-733ce28508c1-logs\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.964557 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmt8r\" (UniqueName: \"kubernetes.io/projected/f06b9e12-5e93-4ed8-80f1-733ce28508c1-kube-api-access-zmt8r\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.964636 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06b9e12-5e93-4ed8-80f1-733ce28508c1-config-data\") pod \"nova-metadata-0\" 
(UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:46 crc kubenswrapper[4851]: I0223 13:29:46.964680 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06b9e12-5e93-4ed8-80f1-733ce28508c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.066653 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06b9e12-5e93-4ed8-80f1-733ce28508c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.067166 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f06b9e12-5e93-4ed8-80f1-733ce28508c1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.067256 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06b9e12-5e93-4ed8-80f1-733ce28508c1-logs\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.067352 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmt8r\" (UniqueName: \"kubernetes.io/projected/f06b9e12-5e93-4ed8-80f1-733ce28508c1-kube-api-access-zmt8r\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.067430 
4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06b9e12-5e93-4ed8-80f1-733ce28508c1-config-data\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.068232 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f06b9e12-5e93-4ed8-80f1-733ce28508c1-logs\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.074006 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f06b9e12-5e93-4ed8-80f1-733ce28508c1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.074786 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f06b9e12-5e93-4ed8-80f1-733ce28508c1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.080318 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f06b9e12-5e93-4ed8-80f1-733ce28508c1-config-data\") pod \"nova-metadata-0\" (UID: \"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.085046 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmt8r\" (UniqueName: \"kubernetes.io/projected/f06b9e12-5e93-4ed8-80f1-733ce28508c1-kube-api-access-zmt8r\") pod \"nova-metadata-0\" (UID: 
\"f06b9e12-5e93-4ed8-80f1-733ce28508c1\") " pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.203764 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.642431 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 13:29:47 crc kubenswrapper[4851]: W0223 13:29:47.658501 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf06b9e12_5e93_4ed8_80f1_733ce28508c1.slice/crio-8518700ce9a0759946cb22d67c12a171eac828e1b032745ea287617c8f538852 WatchSource:0}: Error finding container 8518700ce9a0759946cb22d67c12a171eac828e1b032745ea287617c8f538852: Status 404 returned error can't find the container with id 8518700ce9a0759946cb22d67c12a171eac828e1b032745ea287617c8f538852 Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.758302 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f06b9e12-5e93-4ed8-80f1-733ce28508c1","Type":"ContainerStarted","Data":"8518700ce9a0759946cb22d67c12a171eac828e1b032745ea287617c8f538852"} Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.762416 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dcffae8a-b5fd-49bf-9316-1cc871d0568c","Type":"ContainerStarted","Data":"6b1c05774850d6f1fe2af60ef0fbab2e39a3cb35e27a041fe445d6618d530dc4"} Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.799766 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.799749208 podStartE2EDuration="2.799749208s" podCreationTimestamp="2026-02-23 13:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:47.795659784 +0000 UTC 
m=+1342.477363482" watchObservedRunningTime="2026-02-23 13:29:47.799749208 +0000 UTC m=+1342.481452886" Feb 23 13:29:47 crc kubenswrapper[4851]: I0223 13:29:47.980083 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b495ac-e87e-4db2-a35f-f9efce68ebc7" path="/var/lib/kubelet/pods/e6b495ac-e87e-4db2-a35f-f9efce68ebc7/volumes" Feb 23 13:29:48 crc kubenswrapper[4851]: I0223 13:29:48.773827 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f06b9e12-5e93-4ed8-80f1-733ce28508c1","Type":"ContainerStarted","Data":"6f000f20946d4f20315fd78acbe0ddff29b61ac1dbe6bcf9c3d9ceeeb6b0a90c"} Feb 23 13:29:48 crc kubenswrapper[4851]: I0223 13:29:48.774107 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f06b9e12-5e93-4ed8-80f1-733ce28508c1","Type":"ContainerStarted","Data":"0ca2f1f1f6b64e73e7d2750d22f882c8f459517c6827ccb399ad174ca1340c90"} Feb 23 13:29:48 crc kubenswrapper[4851]: I0223 13:29:48.797526 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.797504427 podStartE2EDuration="2.797504427s" podCreationTimestamp="2026-02-23 13:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:29:48.792991991 +0000 UTC m=+1343.474695689" watchObservedRunningTime="2026-02-23 13:29:48.797504427 +0000 UTC m=+1343.479208115" Feb 23 13:29:51 crc kubenswrapper[4851]: I0223 13:29:51.213153 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 23 13:29:52 crc kubenswrapper[4851]: I0223 13:29:52.106685 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 13:29:52 crc kubenswrapper[4851]: I0223 13:29:52.107060 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Feb 23 13:29:52 crc kubenswrapper[4851]: I0223 13:29:52.203870 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 13:29:52 crc kubenswrapper[4851]: I0223 13:29:52.204173 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 13:29:53 crc kubenswrapper[4851]: I0223 13:29:53.118453 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6c5dd0b3-902e-4156-9538-fccbb6f319ae" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:29:53 crc kubenswrapper[4851]: I0223 13:29:53.118464 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6c5dd0b3-902e-4156-9538-fccbb6f319ae" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:29:56 crc kubenswrapper[4851]: I0223 13:29:56.213398 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 13:29:56 crc kubenswrapper[4851]: I0223 13:29:56.240851 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 23 13:29:56 crc kubenswrapper[4851]: I0223 13:29:56.862952 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 13:29:57 crc kubenswrapper[4851]: I0223 13:29:57.204966 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 13:29:57 crc kubenswrapper[4851]: I0223 13:29:57.205010 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 13:29:58 crc kubenswrapper[4851]: I0223 13:29:58.217572 
4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f06b9e12-5e93-4ed8-80f1-733ce28508c1" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:29:58 crc kubenswrapper[4851]: I0223 13:29:58.217569 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f06b9e12-5e93-4ed8-80f1-733ce28508c1" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.153650 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk"] Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.154834 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.156979 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.157057 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.168055 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk"] Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.310455 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/274dfffc-b683-497e-b4f5-42454c1bda65-secret-volume\") pod \"collect-profiles-29530890-thbnk\" 
(UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.310826 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfnqn\" (UniqueName: \"kubernetes.io/projected/274dfffc-b683-497e-b4f5-42454c1bda65-kube-api-access-vfnqn\") pod \"collect-profiles-29530890-thbnk\" (UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.310861 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/274dfffc-b683-497e-b4f5-42454c1bda65-config-volume\") pod \"collect-profiles-29530890-thbnk\" (UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.412780 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfnqn\" (UniqueName: \"kubernetes.io/projected/274dfffc-b683-497e-b4f5-42454c1bda65-kube-api-access-vfnqn\") pod \"collect-profiles-29530890-thbnk\" (UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.412889 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/274dfffc-b683-497e-b4f5-42454c1bda65-config-volume\") pod \"collect-profiles-29530890-thbnk\" (UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.412929 4851 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/274dfffc-b683-497e-b4f5-42454c1bda65-secret-volume\") pod \"collect-profiles-29530890-thbnk\" (UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.414642 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/274dfffc-b683-497e-b4f5-42454c1bda65-config-volume\") pod \"collect-profiles-29530890-thbnk\" (UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.418286 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/274dfffc-b683-497e-b4f5-42454c1bda65-secret-volume\") pod \"collect-profiles-29530890-thbnk\" (UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.437042 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfnqn\" (UniqueName: \"kubernetes.io/projected/274dfffc-b683-497e-b4f5-42454c1bda65-kube-api-access-vfnqn\") pod \"collect-profiles-29530890-thbnk\" (UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:00 crc kubenswrapper[4851]: I0223 13:30:00.482916 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:01 crc kubenswrapper[4851]: W0223 13:30:01.055672 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod274dfffc_b683_497e_b4f5_42454c1bda65.slice/crio-8d0d9b21606a7a9d0fc4f2c65e24c653e32adf2aa831dfee7c1ac47e37899b15 WatchSource:0}: Error finding container 8d0d9b21606a7a9d0fc4f2c65e24c653e32adf2aa831dfee7c1ac47e37899b15: Status 404 returned error can't find the container with id 8d0d9b21606a7a9d0fc4f2c65e24c653e32adf2aa831dfee7c1ac47e37899b15 Feb 23 13:30:01 crc kubenswrapper[4851]: I0223 13:30:01.062068 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk"] Feb 23 13:30:01 crc kubenswrapper[4851]: I0223 13:30:01.888087 4851 generic.go:334] "Generic (PLEG): container finished" podID="274dfffc-b683-497e-b4f5-42454c1bda65" containerID="6a8f17c1664770601b327f2d59e24ccb722fb499d3868b020fdb016019fb277e" exitCode=0 Feb 23 13:30:01 crc kubenswrapper[4851]: I0223 13:30:01.888125 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" event={"ID":"274dfffc-b683-497e-b4f5-42454c1bda65","Type":"ContainerDied","Data":"6a8f17c1664770601b327f2d59e24ccb722fb499d3868b020fdb016019fb277e"} Feb 23 13:30:01 crc kubenswrapper[4851]: I0223 13:30:01.888398 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" event={"ID":"274dfffc-b683-497e-b4f5-42454c1bda65","Type":"ContainerStarted","Data":"8d0d9b21606a7a9d0fc4f2c65e24c653e32adf2aa831dfee7c1ac47e37899b15"} Feb 23 13:30:02 crc kubenswrapper[4851]: I0223 13:30:02.111675 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 13:30:02 crc kubenswrapper[4851]: I0223 
13:30:02.112292 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 13:30:02 crc kubenswrapper[4851]: I0223 13:30:02.112553 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 23 13:30:02 crc kubenswrapper[4851]: I0223 13:30:02.118676 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 13:30:02 crc kubenswrapper[4851]: I0223 13:30:02.896245 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 23 13:30:02 crc kubenswrapper[4851]: I0223 13:30:02.903146 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.267073 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.393772 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/274dfffc-b683-497e-b4f5-42454c1bda65-secret-volume\") pod \"274dfffc-b683-497e-b4f5-42454c1bda65\" (UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.393849 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/274dfffc-b683-497e-b4f5-42454c1bda65-config-volume\") pod \"274dfffc-b683-497e-b4f5-42454c1bda65\" (UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.393989 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfnqn\" (UniqueName: \"kubernetes.io/projected/274dfffc-b683-497e-b4f5-42454c1bda65-kube-api-access-vfnqn\") pod 
\"274dfffc-b683-497e-b4f5-42454c1bda65\" (UID: \"274dfffc-b683-497e-b4f5-42454c1bda65\") " Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.394663 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/274dfffc-b683-497e-b4f5-42454c1bda65-config-volume" (OuterVolumeSpecName: "config-volume") pod "274dfffc-b683-497e-b4f5-42454c1bda65" (UID: "274dfffc-b683-497e-b4f5-42454c1bda65"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.398611 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/274dfffc-b683-497e-b4f5-42454c1bda65-kube-api-access-vfnqn" (OuterVolumeSpecName: "kube-api-access-vfnqn") pod "274dfffc-b683-497e-b4f5-42454c1bda65" (UID: "274dfffc-b683-497e-b4f5-42454c1bda65"). InnerVolumeSpecName "kube-api-access-vfnqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.398726 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274dfffc-b683-497e-b4f5-42454c1bda65-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "274dfffc-b683-497e-b4f5-42454c1bda65" (UID: "274dfffc-b683-497e-b4f5-42454c1bda65"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.495503 4851 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/274dfffc-b683-497e-b4f5-42454c1bda65-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.495540 4851 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/274dfffc-b683-497e-b4f5-42454c1bda65-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.495550 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfnqn\" (UniqueName: \"kubernetes.io/projected/274dfffc-b683-497e-b4f5-42454c1bda65-kube-api-access-vfnqn\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.905537 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" event={"ID":"274dfffc-b683-497e-b4f5-42454c1bda65","Type":"ContainerDied","Data":"8d0d9b21606a7a9d0fc4f2c65e24c653e32adf2aa831dfee7c1ac47e37899b15"} Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.905582 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d0d9b21606a7a9d0fc4f2c65e24c653e32adf2aa831dfee7c1ac47e37899b15" Feb 23 13:30:03 crc kubenswrapper[4851]: I0223 13:30:03.905550 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk" Feb 23 13:30:07 crc kubenswrapper[4851]: I0223 13:30:07.210748 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 13:30:07 crc kubenswrapper[4851]: I0223 13:30:07.211896 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 13:30:07 crc kubenswrapper[4851]: I0223 13:30:07.216873 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 13:30:07 crc kubenswrapper[4851]: I0223 13:30:07.943055 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 13:30:08 crc kubenswrapper[4851]: I0223 13:30:08.024547 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 23 13:30:11 crc kubenswrapper[4851]: I0223 13:30:11.925029 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:30:11 crc kubenswrapper[4851]: I0223 13:30:11.925540 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:30:11 crc kubenswrapper[4851]: I0223 13:30:11.925578 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:30:11 crc kubenswrapper[4851]: I0223 13:30:11.926110 4851 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58ac070e07fd5f5e92265b5996711448defe16f94a724c465cc2214cdff34234"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 13:30:11 crc kubenswrapper[4851]: I0223 13:30:11.926157 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://58ac070e07fd5f5e92265b5996711448defe16f94a724c465cc2214cdff34234" gracePeriod=600 Feb 23 13:30:12 crc kubenswrapper[4851]: I0223 13:30:12.990128 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="58ac070e07fd5f5e92265b5996711448defe16f94a724c465cc2214cdff34234" exitCode=0 Feb 23 13:30:12 crc kubenswrapper[4851]: I0223 13:30:12.990214 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"58ac070e07fd5f5e92265b5996711448defe16f94a724c465cc2214cdff34234"} Feb 23 13:30:12 crc kubenswrapper[4851]: I0223 13:30:12.990720 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"e17ebc61652294833ea0e89a5a1e9e10432ee4605526cd8e9e75484945df4bec"} Feb 23 13:30:12 crc kubenswrapper[4851]: I0223 13:30:12.990742 4851 scope.go:117] "RemoveContainer" containerID="60927ae79050568035bcfc1c3f4be4f3b0b6745f639bddbc9d3c155365093c4b" Feb 23 13:30:16 crc kubenswrapper[4851]: I0223 13:30:16.473591 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 
13:30:17 crc kubenswrapper[4851]: I0223 13:30:17.411114 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:30:20 crc kubenswrapper[4851]: I0223 13:30:20.412859 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="46bf34c9-f0ec-4de6-ae40-fd334c23af27" containerName="rabbitmq" containerID="cri-o://f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a" gracePeriod=604797 Feb 23 13:30:21 crc kubenswrapper[4851]: I0223 13:30:21.613367 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="ec010635-96e5-448a-98c1-e458fd6f31ed" containerName="rabbitmq" containerID="cri-o://9da6a99616ae7b031212de15b4177e80e168f11ac0fde7fc4f50bbd2312d38b8" gracePeriod=604796 Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.047673 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.113586 4851 generic.go:334] "Generic (PLEG): container finished" podID="46bf34c9-f0ec-4de6-ae40-fd334c23af27" containerID="f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a" exitCode=0 Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.113675 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.113685 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"46bf34c9-f0ec-4de6-ae40-fd334c23af27","Type":"ContainerDied","Data":"f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a"} Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.115295 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"46bf34c9-f0ec-4de6-ae40-fd334c23af27","Type":"ContainerDied","Data":"0f79a0b45ef1390a938590bdc7a064da53f3e0d62d797a3c47035f43c10b4335"} Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.115323 4851 scope.go:117] "RemoveContainer" containerID="f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.122598 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db9mn\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-kube-api-access-db9mn\") pod \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.122663 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-erlang-cookie\") pod \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.122705 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.122776 4851 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-tls\") pod \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.122891 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-confd\") pod \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.122920 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-plugins-conf\") pod \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.122982 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46bf34c9-f0ec-4de6-ae40-fd334c23af27-pod-info\") pod \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.123073 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-server-conf\") pod \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.123118 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46bf34c9-f0ec-4de6-ae40-fd334c23af27-erlang-cookie-secret\") pod \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\" (UID: 
\"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.123139 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-plugins\") pod \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.123163 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-config-data\") pod \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\" (UID: \"46bf34c9-f0ec-4de6-ae40-fd334c23af27\") " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.124017 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "46bf34c9-f0ec-4de6-ae40-fd334c23af27" (UID: "46bf34c9-f0ec-4de6-ae40-fd334c23af27"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.124623 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "46bf34c9-f0ec-4de6-ae40-fd334c23af27" (UID: "46bf34c9-f0ec-4de6-ae40-fd334c23af27"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.129804 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "46bf34c9-f0ec-4de6-ae40-fd334c23af27" (UID: "46bf34c9-f0ec-4de6-ae40-fd334c23af27"). 
InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.131076 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "46bf34c9-f0ec-4de6-ae40-fd334c23af27" (UID: "46bf34c9-f0ec-4de6-ae40-fd334c23af27"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.131678 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "46bf34c9-f0ec-4de6-ae40-fd334c23af27" (UID: "46bf34c9-f0ec-4de6-ae40-fd334c23af27"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.133915 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46bf34c9-f0ec-4de6-ae40-fd334c23af27-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "46bf34c9-f0ec-4de6-ae40-fd334c23af27" (UID: "46bf34c9-f0ec-4de6-ae40-fd334c23af27"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.138132 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/46bf34c9-f0ec-4de6-ae40-fd334c23af27-pod-info" (OuterVolumeSpecName: "pod-info") pod "46bf34c9-f0ec-4de6-ae40-fd334c23af27" (UID: "46bf34c9-f0ec-4de6-ae40-fd334c23af27"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.139418 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-kube-api-access-db9mn" (OuterVolumeSpecName: "kube-api-access-db9mn") pod "46bf34c9-f0ec-4de6-ae40-fd334c23af27" (UID: "46bf34c9-f0ec-4de6-ae40-fd334c23af27"). InnerVolumeSpecName "kube-api-access-db9mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.190804 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-config-data" (OuterVolumeSpecName: "config-data") pod "46bf34c9-f0ec-4de6-ae40-fd334c23af27" (UID: "46bf34c9-f0ec-4de6-ae40-fd334c23af27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.194388 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-server-conf" (OuterVolumeSpecName: "server-conf") pod "46bf34c9-f0ec-4de6-ae40-fd334c23af27" (UID: "46bf34c9-f0ec-4de6-ae40-fd334c23af27"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.228755 4851 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46bf34c9-f0ec-4de6-ae40-fd334c23af27-pod-info\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.228793 4851 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-server-conf\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.228804 4851 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46bf34c9-f0ec-4de6-ae40-fd334c23af27-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.228816 4851 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.228828 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.228838 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db9mn\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-kube-api-access-db9mn\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.228848 4851 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:27 crc kubenswrapper[4851]: 
I0223 13:30:27.228873 4851 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.228885 4851 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.228898 4851 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46bf34c9-f0ec-4de6-ae40-fd334c23af27-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.268631 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "46bf34c9-f0ec-4de6-ae40-fd334c23af27" (UID: "46bf34c9-f0ec-4de6-ae40-fd334c23af27"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.269047 4851 scope.go:117] "RemoveContainer" containerID="de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.271936 4851 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.290721 4851 scope.go:117] "RemoveContainer" containerID="f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a" Feb 23 13:30:27 crc kubenswrapper[4851]: E0223 13:30:27.291170 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a\": container with ID starting with f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a not found: ID does not exist" containerID="f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.291216 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a"} err="failed to get container status \"f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a\": rpc error: code = NotFound desc = could not find container \"f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a\": container with ID starting with f604149082d024faa9412126e576f9328389c72c67971e302b4c3cfd5e4a953a not found: ID does not exist" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.291241 4851 scope.go:117] "RemoveContainer" containerID="de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b" Feb 23 13:30:27 crc kubenswrapper[4851]: E0223 13:30:27.291690 4851 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b\": container with ID starting with de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b not found: ID does not exist" containerID="de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.291730 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b"} err="failed to get container status \"de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b\": rpc error: code = NotFound desc = could not find container \"de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b\": container with ID starting with de438ad3ab7fee1624dccf95bc2dcc523bda2dd8f5a2e06b456603ba6bd68d5b not found: ID does not exist" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.330494 4851 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46bf34c9-f0ec-4de6-ae40-fd334c23af27-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.330530 4851 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.457035 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.465022 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.485781 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 13:30:27 crc kubenswrapper[4851]: E0223 13:30:27.486132 4851 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="274dfffc-b683-497e-b4f5-42454c1bda65" containerName="collect-profiles" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.486151 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="274dfffc-b683-497e-b4f5-42454c1bda65" containerName="collect-profiles" Feb 23 13:30:27 crc kubenswrapper[4851]: E0223 13:30:27.486172 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46bf34c9-f0ec-4de6-ae40-fd334c23af27" containerName="rabbitmq" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.486180 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="46bf34c9-f0ec-4de6-ae40-fd334c23af27" containerName="rabbitmq" Feb 23 13:30:27 crc kubenswrapper[4851]: E0223 13:30:27.486215 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46bf34c9-f0ec-4de6-ae40-fd334c23af27" containerName="setup-container" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.486221 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="46bf34c9-f0ec-4de6-ae40-fd334c23af27" containerName="setup-container" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.486396 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="274dfffc-b683-497e-b4f5-42454c1bda65" containerName="collect-profiles" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.486425 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="46bf34c9-f0ec-4de6-ae40-fd334c23af27" containerName="rabbitmq" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.487321 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.492911 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.493304 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.493652 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.493890 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.494142 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hh5rw" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.494390 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.495242 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.500813 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.533282 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44d82832-bb2c-4bfe-a9c0-a22e00484c71-pod-info\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.533360 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/44d82832-bb2c-4bfe-a9c0-a22e00484c71-server-conf\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.533381 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.533566 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44d82832-bb2c-4bfe-a9c0-a22e00484c71-config-data\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.533586 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbs4l\" (UniqueName: \"kubernetes.io/projected/44d82832-bb2c-4bfe-a9c0-a22e00484c71-kube-api-access-kbs4l\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.533616 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.533636 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.533652 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44d82832-bb2c-4bfe-a9c0-a22e00484c71-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.533670 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.533695 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.533710 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44d82832-bb2c-4bfe-a9c0-a22e00484c71-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.635395 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " 
pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.635466 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.635497 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44d82832-bb2c-4bfe-a9c0-a22e00484c71-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.635577 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44d82832-bb2c-4bfe-a9c0-a22e00484c71-pod-info\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.635614 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44d82832-bb2c-4bfe-a9c0-a22e00484c71-server-conf\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.635636 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.635690 4851 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44d82832-bb2c-4bfe-a9c0-a22e00484c71-config-data\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.635707 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbs4l\" (UniqueName: \"kubernetes.io/projected/44d82832-bb2c-4bfe-a9c0-a22e00484c71-kube-api-access-kbs4l\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.635736 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.635752 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.635769 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44d82832-bb2c-4bfe-a9c0-a22e00484c71-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.636152 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.636475 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.637191 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44d82832-bb2c-4bfe-a9c0-a22e00484c71-config-data\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.637443 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44d82832-bb2c-4bfe-a9c0-a22e00484c71-server-conf\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.637636 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.638496 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44d82832-bb2c-4bfe-a9c0-a22e00484c71-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.640691 4851 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.641118 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44d82832-bb2c-4bfe-a9c0-a22e00484c71-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.641178 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44d82832-bb2c-4bfe-a9c0-a22e00484c71-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.641591 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44d82832-bb2c-4bfe-a9c0-a22e00484c71-pod-info\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.659818 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbs4l\" (UniqueName: \"kubernetes.io/projected/44d82832-bb2c-4bfe-a9c0-a22e00484c71-kube-api-access-kbs4l\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.667299 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"44d82832-bb2c-4bfe-a9c0-a22e00484c71\") " 
pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.813984 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.984200 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46bf34c9-f0ec-4de6-ae40-fd334c23af27" path="/var/lib/kubelet/pods/46bf34c9-f0ec-4de6-ae40-fd334c23af27/volumes" Feb 23 13:30:27 crc kubenswrapper[4851]: I0223 13:30:27.984858 4851 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ec010635-96e5-448a-98c1-e458fd6f31ed" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.131477 4851 generic.go:334] "Generic (PLEG): container finished" podID="ec010635-96e5-448a-98c1-e458fd6f31ed" containerID="9da6a99616ae7b031212de15b4177e80e168f11ac0fde7fc4f50bbd2312d38b8" exitCode=0 Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.131596 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec010635-96e5-448a-98c1-e458fd6f31ed","Type":"ContainerDied","Data":"9da6a99616ae7b031212de15b4177e80e168f11ac0fde7fc4f50bbd2312d38b8"} Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.189635 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.248509 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-plugins-conf\") pod \"ec010635-96e5-448a-98c1-e458fd6f31ed\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.248580 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-tls\") pod \"ec010635-96e5-448a-98c1-e458fd6f31ed\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.248611 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-config-data\") pod \"ec010635-96e5-448a-98c1-e458fd6f31ed\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.248674 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-confd\") pod \"ec010635-96e5-448a-98c1-e458fd6f31ed\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.248716 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-server-conf\") pod \"ec010635-96e5-448a-98c1-e458fd6f31ed\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.248792 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/ec010635-96e5-448a-98c1-e458fd6f31ed-erlang-cookie-secret\") pod \"ec010635-96e5-448a-98c1-e458fd6f31ed\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.248820 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec010635-96e5-448a-98c1-e458fd6f31ed-pod-info\") pod \"ec010635-96e5-448a-98c1-e458fd6f31ed\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.248914 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-plugins\") pod \"ec010635-96e5-448a-98c1-e458fd6f31ed\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.249563 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-erlang-cookie\") pod \"ec010635-96e5-448a-98c1-e458fd6f31ed\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.249606 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ec010635-96e5-448a-98c1-e458fd6f31ed\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.249641 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf625\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-kube-api-access-vf625\") pod \"ec010635-96e5-448a-98c1-e458fd6f31ed\" (UID: \"ec010635-96e5-448a-98c1-e458fd6f31ed\") " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 
13:30:28.252261 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ec010635-96e5-448a-98c1-e458fd6f31ed" (UID: "ec010635-96e5-448a-98c1-e458fd6f31ed"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.252687 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ec010635-96e5-448a-98c1-e458fd6f31ed" (UID: "ec010635-96e5-448a-98c1-e458fd6f31ed"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.252975 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ec010635-96e5-448a-98c1-e458fd6f31ed" (UID: "ec010635-96e5-448a-98c1-e458fd6f31ed"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.254647 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-kube-api-access-vf625" (OuterVolumeSpecName: "kube-api-access-vf625") pod "ec010635-96e5-448a-98c1-e458fd6f31ed" (UID: "ec010635-96e5-448a-98c1-e458fd6f31ed"). InnerVolumeSpecName "kube-api-access-vf625". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.257700 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ec010635-96e5-448a-98c1-e458fd6f31ed" (UID: "ec010635-96e5-448a-98c1-e458fd6f31ed"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.258537 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "ec010635-96e5-448a-98c1-e458fd6f31ed" (UID: "ec010635-96e5-448a-98c1-e458fd6f31ed"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.258545 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec010635-96e5-448a-98c1-e458fd6f31ed-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ec010635-96e5-448a-98c1-e458fd6f31ed" (UID: "ec010635-96e5-448a-98c1-e458fd6f31ed"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.262534 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ec010635-96e5-448a-98c1-e458fd6f31ed-pod-info" (OuterVolumeSpecName: "pod-info") pod "ec010635-96e5-448a-98c1-e458fd6f31ed" (UID: "ec010635-96e5-448a-98c1-e458fd6f31ed"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.294916 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-config-data" (OuterVolumeSpecName: "config-data") pod "ec010635-96e5-448a-98c1-e458fd6f31ed" (UID: "ec010635-96e5-448a-98c1-e458fd6f31ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.312707 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-server-conf" (OuterVolumeSpecName: "server-conf") pod "ec010635-96e5-448a-98c1-e458fd6f31ed" (UID: "ec010635-96e5-448a-98c1-e458fd6f31ed"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.354041 4851 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ec010635-96e5-448a-98c1-e458fd6f31ed-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.358196 4851 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ec010635-96e5-448a-98c1-e458fd6f31ed-pod-info\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.358447 4851 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.358556 4851 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-erlang-cookie\") on node \"crc\" 
DevicePath \"\"" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.358721 4851 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.362697 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ec010635-96e5-448a-98c1-e458fd6f31ed" (UID: "ec010635-96e5-448a-98c1-e458fd6f31ed"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.362977 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf625\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-kube-api-access-vf625\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.363112 4851 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.363214 4851 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.363297 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.363543 4851 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/ec010635-96e5-448a-98c1-e458fd6f31ed-server-conf\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.391846 4851 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.402699 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.465300 4851 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:28 crc kubenswrapper[4851]: I0223 13:30:28.465348 4851 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ec010635-96e5-448a-98c1-e458fd6f31ed-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.147176 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"44d82832-bb2c-4bfe-a9c0-a22e00484c71","Type":"ContainerStarted","Data":"4220f71225cf2ab5a178074dd8864541af2a7994bf059154d31fba79ccfd4eb2"} Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.150679 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ec010635-96e5-448a-98c1-e458fd6f31ed","Type":"ContainerDied","Data":"0cd77ddffdba57d2b601e67fc45cba449da7e88c993a8894579f73b50b00bfc3"} Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.150740 4851 scope.go:117] "RemoveContainer" containerID="9da6a99616ae7b031212de15b4177e80e168f11ac0fde7fc4f50bbd2312d38b8" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.150765 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.174014 4851 scope.go:117] "RemoveContainer" containerID="e273fe812a61abead0849f007f3f26e978b68df2e0939cf7a163e3001984bc7a" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.193732 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.213320 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.242942 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:30:29 crc kubenswrapper[4851]: E0223 13:30:29.243433 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec010635-96e5-448a-98c1-e458fd6f31ed" containerName="rabbitmq" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.243449 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec010635-96e5-448a-98c1-e458fd6f31ed" containerName="rabbitmq" Feb 23 13:30:29 crc kubenswrapper[4851]: E0223 13:30:29.243473 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec010635-96e5-448a-98c1-e458fd6f31ed" containerName="setup-container" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.243479 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec010635-96e5-448a-98c1-e458fd6f31ed" containerName="setup-container" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.243678 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec010635-96e5-448a-98c1-e458fd6f31ed" containerName="rabbitmq" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.244675 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.246660 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.247613 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.248625 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.248764 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.248776 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.248938 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8gl95" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.249150 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.253270 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.288157 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2aa1b0e-e4a7-4365-99c9-4e521e896925-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.288246 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2aa1b0e-e4a7-4365-99c9-4e521e896925-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.288270 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2aa1b0e-e4a7-4365-99c9-4e521e896925-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.288304 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.288349 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2aa1b0e-e4a7-4365-99c9-4e521e896925-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.288374 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.288451 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.288476 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.288506 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25tgs\" (UniqueName: \"kubernetes.io/projected/d2aa1b0e-e4a7-4365-99c9-4e521e896925-kube-api-access-25tgs\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.288542 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2aa1b0e-e4a7-4365-99c9-4e521e896925-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.288576 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.390227 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/d2aa1b0e-e4a7-4365-99c9-4e521e896925-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.391576 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.391628 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2aa1b0e-e4a7-4365-99c9-4e521e896925-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.391663 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d2aa1b0e-e4a7-4365-99c9-4e521e896925-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.391678 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2aa1b0e-e4a7-4365-99c9-4e521e896925-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.391707 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-tls\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.391732 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2aa1b0e-e4a7-4365-99c9-4e521e896925-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.391752 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.391828 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.391846 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.391877 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25tgs\" (UniqueName: \"kubernetes.io/projected/d2aa1b0e-e4a7-4365-99c9-4e521e896925-kube-api-access-25tgs\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.392046 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.392694 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.392808 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.392823 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d2aa1b0e-e4a7-4365-99c9-4e521e896925-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.393025 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d2aa1b0e-e4a7-4365-99c9-4e521e896925-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.393228 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d2aa1b0e-e4a7-4365-99c9-4e521e896925-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.418362 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25tgs\" (UniqueName: \"kubernetes.io/projected/d2aa1b0e-e4a7-4365-99c9-4e521e896925-kube-api-access-25tgs\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.422114 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.456911 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.457025 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d2aa1b0e-e4a7-4365-99c9-4e521e896925-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.457872 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/d2aa1b0e-e4a7-4365-99c9-4e521e896925-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.458077 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d2aa1b0e-e4a7-4365-99c9-4e521e896925-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d2aa1b0e-e4a7-4365-99c9-4e521e896925\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.573248 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:30:29 crc kubenswrapper[4851]: I0223 13:30:29.979699 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec010635-96e5-448a-98c1-e458fd6f31ed" path="/var/lib/kubelet/pods/ec010635-96e5-448a-98c1-e458fd6f31ed/volumes" Feb 23 13:30:30 crc kubenswrapper[4851]: I0223 13:30:30.012405 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 13:30:30 crc kubenswrapper[4851]: W0223 13:30:30.015495 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2aa1b0e_e4a7_4365_99c9_4e521e896925.slice/crio-8d7f27f3d1cbdbe48760d57700342b947abe2eb49afe7a3e1d7947e23410ae7a WatchSource:0}: Error finding container 8d7f27f3d1cbdbe48760d57700342b947abe2eb49afe7a3e1d7947e23410ae7a: Status 404 returned error can't find the container with id 8d7f27f3d1cbdbe48760d57700342b947abe2eb49afe7a3e1d7947e23410ae7a Feb 23 13:30:30 crc kubenswrapper[4851]: I0223 13:30:30.162575 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"d2aa1b0e-e4a7-4365-99c9-4e521e896925","Type":"ContainerStarted","Data":"8d7f27f3d1cbdbe48760d57700342b947abe2eb49afe7a3e1d7947e23410ae7a"} Feb 23 13:30:30 crc kubenswrapper[4851]: I0223 13:30:30.164084 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"44d82832-bb2c-4bfe-a9c0-a22e00484c71","Type":"ContainerStarted","Data":"d4b08387fdd1ed850fd07ef776360a92b3369921dac3b52c96b69b60416fd810"} Feb 23 13:30:32 crc kubenswrapper[4851]: I0223 13:30:32.183536 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d2aa1b0e-e4a7-4365-99c9-4e521e896925","Type":"ContainerStarted","Data":"68b9b9c10b11ce75c4a571cee4b567c170a6e917e5ffbd32777cbaa3f67d16a9"} Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.778190 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kw225"] Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.780615 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.789365 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.793620 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kw225"] Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.870083 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.870138 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-config\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.870219 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.870314 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.870384 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.870407 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wvxs\" (UniqueName: \"kubernetes.io/projected/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-kube-api-access-7wvxs\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.870504 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.972497 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.972554 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wvxs\" (UniqueName: \"kubernetes.io/projected/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-kube-api-access-7wvxs\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: 
\"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.972612 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.972638 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.972666 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-config\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.972738 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.972832 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.973562 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.973646 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.974077 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-config\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.974094 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.974079 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.974080 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:33 crc kubenswrapper[4851]: I0223 13:30:33.994583 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wvxs\" (UniqueName: \"kubernetes.io/projected/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-kube-api-access-7wvxs\") pod \"dnsmasq-dns-79bd4cc8c9-kw225\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:34 crc kubenswrapper[4851]: I0223 13:30:34.105920 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:34 crc kubenswrapper[4851]: I0223 13:30:34.556392 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kw225"] Feb 23 13:30:35 crc kubenswrapper[4851]: I0223 13:30:35.234352 4851 generic.go:334] "Generic (PLEG): container finished" podID="b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" containerID="a4df8f1eb9f8039d26419f63fb5d89cc45de089bcbe97f647cd41dd411204d3c" exitCode=0 Feb 23 13:30:35 crc kubenswrapper[4851]: I0223 13:30:35.234395 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" event={"ID":"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd","Type":"ContainerDied","Data":"a4df8f1eb9f8039d26419f63fb5d89cc45de089bcbe97f647cd41dd411204d3c"} Feb 23 13:30:35 crc kubenswrapper[4851]: I0223 13:30:35.234581 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" event={"ID":"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd","Type":"ContainerStarted","Data":"c9b857da72644dc1987d34b7ea13a9b50265df6e8162a70ea1230a6a3f6015dc"} Feb 23 13:30:36 crc kubenswrapper[4851]: I0223 
13:30:36.245697 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" event={"ID":"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd","Type":"ContainerStarted","Data":"7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86"} Feb 23 13:30:36 crc kubenswrapper[4851]: I0223 13:30:36.246057 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:36 crc kubenswrapper[4851]: I0223 13:30:36.272149 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" podStartSLOduration=3.272130795 podStartE2EDuration="3.272130795s" podCreationTimestamp="2026-02-23 13:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:30:36.264236953 +0000 UTC m=+1390.945940661" watchObservedRunningTime="2026-02-23 13:30:36.272130795 +0000 UTC m=+1390.953834473" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.107461 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.174867 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmqvl"] Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.308051 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" podUID="7f9baced-54f4-4e5e-ab82-6aed7824b9d7" containerName="dnsmasq-dns" containerID="cri-o://5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f" gracePeriod=10 Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.348397 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-vjdpf"] Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.350279 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.362953 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-vjdpf"] Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.457597 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.457964 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.457999 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-dns-svc\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.458028 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.458048 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-config\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.458062 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.458199 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2gmf\" (UniqueName: \"kubernetes.io/projected/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-kube-api-access-r2gmf\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.559506 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2gmf\" (UniqueName: \"kubernetes.io/projected/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-kube-api-access-r2gmf\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.559595 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.559682 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.559718 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-dns-svc\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.559750 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.559771 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-config\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.559787 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.560832 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.560881 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.560883 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-dns-svc\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.560951 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.561021 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-config\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.561090 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.577136 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2gmf\" (UniqueName: \"kubernetes.io/projected/b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f-kube-api-access-r2gmf\") pod \"dnsmasq-dns-55478c4467-vjdpf\" (UID: \"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f\") " pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.696794 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.818858 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.968550 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78znt\" (UniqueName: \"kubernetes.io/projected/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-kube-api-access-78znt\") pod \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.968618 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-sb\") pod \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.968786 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-swift-storage-0\") pod 
\"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.968819 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-svc\") pod \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.968933 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-config\") pod \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.968989 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-nb\") pod \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\" (UID: \"7f9baced-54f4-4e5e-ab82-6aed7824b9d7\") " Feb 23 13:30:44 crc kubenswrapper[4851]: I0223 13:30:44.973228 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-kube-api-access-78znt" (OuterVolumeSpecName: "kube-api-access-78znt") pod "7f9baced-54f4-4e5e-ab82-6aed7824b9d7" (UID: "7f9baced-54f4-4e5e-ab82-6aed7824b9d7"). InnerVolumeSpecName "kube-api-access-78znt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.014255 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f9baced-54f4-4e5e-ab82-6aed7824b9d7" (UID: "7f9baced-54f4-4e5e-ab82-6aed7824b9d7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.022616 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f9baced-54f4-4e5e-ab82-6aed7824b9d7" (UID: "7f9baced-54f4-4e5e-ab82-6aed7824b9d7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.023948 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f9baced-54f4-4e5e-ab82-6aed7824b9d7" (UID: "7f9baced-54f4-4e5e-ab82-6aed7824b9d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.032894 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-config" (OuterVolumeSpecName: "config") pod "7f9baced-54f4-4e5e-ab82-6aed7824b9d7" (UID: "7f9baced-54f4-4e5e-ab82-6aed7824b9d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.045659 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f9baced-54f4-4e5e-ab82-6aed7824b9d7" (UID: "7f9baced-54f4-4e5e-ab82-6aed7824b9d7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.071345 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.071381 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.071393 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78znt\" (UniqueName: \"kubernetes.io/projected/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-kube-api-access-78znt\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.071402 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.071411 4851 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.071421 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f9baced-54f4-4e5e-ab82-6aed7824b9d7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.152361 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-vjdpf"] Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.317791 4851 generic.go:334] "Generic (PLEG): container finished" podID="7f9baced-54f4-4e5e-ab82-6aed7824b9d7" 
containerID="5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f" exitCode=0 Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.317900 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.317909 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" event={"ID":"7f9baced-54f4-4e5e-ab82-6aed7824b9d7","Type":"ContainerDied","Data":"5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f"} Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.318518 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-mmqvl" event={"ID":"7f9baced-54f4-4e5e-ab82-6aed7824b9d7","Type":"ContainerDied","Data":"f096e9335bbd7e6600753a2c41a221c2d0d9f110b709d386adfb7ba725b261c5"} Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.318551 4851 scope.go:117] "RemoveContainer" containerID="5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.321071 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-vjdpf" event={"ID":"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f","Type":"ContainerStarted","Data":"5b1e46b4d743aede9d85d628373474e1a9d567a1a43ed8da7c0c1eb6048d7b49"} Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.344191 4851 scope.go:117] "RemoveContainer" containerID="39d39fb99de2b0857dcd19e5bd43e0ad64727c12863a97018cae0ac76f396279" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.371924 4851 scope.go:117] "RemoveContainer" containerID="5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f" Feb 23 13:30:45 crc kubenswrapper[4851]: E0223 13:30:45.372716 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f\": container with ID starting with 5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f not found: ID does not exist" containerID="5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.372772 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f"} err="failed to get container status \"5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f\": rpc error: code = NotFound desc = could not find container \"5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f\": container with ID starting with 5bfcf028d0ec9395bcd128bf7cd945f54c488ee66c5f151bd399a988170e592f not found: ID does not exist" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.372792 4851 scope.go:117] "RemoveContainer" containerID="39d39fb99de2b0857dcd19e5bd43e0ad64727c12863a97018cae0ac76f396279" Feb 23 13:30:45 crc kubenswrapper[4851]: E0223 13:30:45.373226 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39d39fb99de2b0857dcd19e5bd43e0ad64727c12863a97018cae0ac76f396279\": container with ID starting with 39d39fb99de2b0857dcd19e5bd43e0ad64727c12863a97018cae0ac76f396279 not found: ID does not exist" containerID="39d39fb99de2b0857dcd19e5bd43e0ad64727c12863a97018cae0ac76f396279" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.373274 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39d39fb99de2b0857dcd19e5bd43e0ad64727c12863a97018cae0ac76f396279"} err="failed to get container status \"39d39fb99de2b0857dcd19e5bd43e0ad64727c12863a97018cae0ac76f396279\": rpc error: code = NotFound desc = could not find container \"39d39fb99de2b0857dcd19e5bd43e0ad64727c12863a97018cae0ac76f396279\": container with ID 
starting with 39d39fb99de2b0857dcd19e5bd43e0ad64727c12863a97018cae0ac76f396279 not found: ID does not exist" Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.375285 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmqvl"] Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.382957 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-mmqvl"] Feb 23 13:30:45 crc kubenswrapper[4851]: I0223 13:30:45.980193 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f9baced-54f4-4e5e-ab82-6aed7824b9d7" path="/var/lib/kubelet/pods/7f9baced-54f4-4e5e-ab82-6aed7824b9d7/volumes" Feb 23 13:30:46 crc kubenswrapper[4851]: I0223 13:30:46.331988 4851 generic.go:334] "Generic (PLEG): container finished" podID="b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f" containerID="c2ee11c50544cd683d5a3a88fa6bfd44e1a2b0710c149d7af4ae13b0930c1e34" exitCode=0 Feb 23 13:30:46 crc kubenswrapper[4851]: I0223 13:30:46.332027 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-vjdpf" event={"ID":"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f","Type":"ContainerDied","Data":"c2ee11c50544cd683d5a3a88fa6bfd44e1a2b0710c149d7af4ae13b0930c1e34"} Feb 23 13:30:47 crc kubenswrapper[4851]: I0223 13:30:47.345998 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-vjdpf" event={"ID":"b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f","Type":"ContainerStarted","Data":"2dd878d67f6c67e1fad8eb87828562ca1e459e6e661f228eaa4f3465047fb50e"} Feb 23 13:30:47 crc kubenswrapper[4851]: I0223 13:30:47.346439 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:47 crc kubenswrapper[4851]: I0223 13:30:47.365137 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-vjdpf" podStartSLOduration=3.365119053 podStartE2EDuration="3.365119053s" 
podCreationTimestamp="2026-02-23 13:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:30:47.364247299 +0000 UTC m=+1402.045950997" watchObservedRunningTime="2026-02-23 13:30:47.365119053 +0000 UTC m=+1402.046822741" Feb 23 13:30:54 crc kubenswrapper[4851]: I0223 13:30:54.698555 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-vjdpf" Feb 23 13:30:54 crc kubenswrapper[4851]: I0223 13:30:54.798518 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kw225"] Feb 23 13:30:54 crc kubenswrapper[4851]: I0223 13:30:54.798794 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" podUID="b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" containerName="dnsmasq-dns" containerID="cri-o://7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86" gracePeriod=10 Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.281128 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.376145 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-openstack-edpm-ipam\") pod \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.376201 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-config\") pod \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.376243 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-sb\") pod \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.376349 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-svc\") pod \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.376475 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-swift-storage-0\") pod \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.376501 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-nb\") pod \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.376564 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wvxs\" (UniqueName: \"kubernetes.io/projected/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-kube-api-access-7wvxs\") pod \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\" (UID: \"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd\") " Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.381640 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-kube-api-access-7wvxs" (OuterVolumeSpecName: "kube-api-access-7wvxs") pod "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" (UID: "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd"). InnerVolumeSpecName "kube-api-access-7wvxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.424239 4851 generic.go:334] "Generic (PLEG): container finished" podID="b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" containerID="7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86" exitCode=0 Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.424283 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" event={"ID":"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd","Type":"ContainerDied","Data":"7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86"} Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.424309 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" event={"ID":"b5b365fa-5ec6-46bd-a4b1-6855571b2fdd","Type":"ContainerDied","Data":"c9b857da72644dc1987d34b7ea13a9b50265df6e8162a70ea1230a6a3f6015dc"} Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.424440 4851 scope.go:117] "RemoveContainer" 
containerID="7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.424566 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kw225" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.428632 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" (UID: "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.437622 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" (UID: "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.437678 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" (UID: "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.438610 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-config" (OuterVolumeSpecName: "config") pod "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" (UID: "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.472912 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" (UID: "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.472938 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" (UID: "b5b365fa-5ec6-46bd-a4b1-6855571b2fdd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.478679 4851 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.478708 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.478719 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wvxs\" (UniqueName: \"kubernetes.io/projected/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-kube-api-access-7wvxs\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.478728 4851 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.478739 4851 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-config\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.478747 4851 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.478754 4851 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.505584 4851 scope.go:117] "RemoveContainer" containerID="a4df8f1eb9f8039d26419f63fb5d89cc45de089bcbe97f647cd41dd411204d3c" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.527812 4851 scope.go:117] "RemoveContainer" containerID="7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86" Feb 23 13:30:55 crc kubenswrapper[4851]: E0223 13:30:55.528199 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86\": container with ID starting with 7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86 not found: ID does not exist" containerID="7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.528237 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86"} err="failed to get container status 
\"7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86\": rpc error: code = NotFound desc = could not find container \"7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86\": container with ID starting with 7374df6232f1e3209725994c224996b9aaf345e272053231321ab5bc1e0ccf86 not found: ID does not exist" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.528264 4851 scope.go:117] "RemoveContainer" containerID="a4df8f1eb9f8039d26419f63fb5d89cc45de089bcbe97f647cd41dd411204d3c" Feb 23 13:30:55 crc kubenswrapper[4851]: E0223 13:30:55.528560 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4df8f1eb9f8039d26419f63fb5d89cc45de089bcbe97f647cd41dd411204d3c\": container with ID starting with a4df8f1eb9f8039d26419f63fb5d89cc45de089bcbe97f647cd41dd411204d3c not found: ID does not exist" containerID="a4df8f1eb9f8039d26419f63fb5d89cc45de089bcbe97f647cd41dd411204d3c" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.528579 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4df8f1eb9f8039d26419f63fb5d89cc45de089bcbe97f647cd41dd411204d3c"} err="failed to get container status \"a4df8f1eb9f8039d26419f63fb5d89cc45de089bcbe97f647cd41dd411204d3c\": rpc error: code = NotFound desc = could not find container \"a4df8f1eb9f8039d26419f63fb5d89cc45de089bcbe97f647cd41dd411204d3c\": container with ID starting with a4df8f1eb9f8039d26419f63fb5d89cc45de089bcbe97f647cd41dd411204d3c not found: ID does not exist" Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.756165 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kw225"] Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.764081 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kw225"] Feb 23 13:30:55 crc kubenswrapper[4851]: I0223 13:30:55.977952 4851 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" path="/var/lib/kubelet/pods/b5b365fa-5ec6-46bd-a4b1-6855571b2fdd/volumes" Feb 23 13:31:02 crc kubenswrapper[4851]: I0223 13:31:02.498958 4851 generic.go:334] "Generic (PLEG): container finished" podID="44d82832-bb2c-4bfe-a9c0-a22e00484c71" containerID="d4b08387fdd1ed850fd07ef776360a92b3369921dac3b52c96b69b60416fd810" exitCode=0 Feb 23 13:31:02 crc kubenswrapper[4851]: I0223 13:31:02.499100 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"44d82832-bb2c-4bfe-a9c0-a22e00484c71","Type":"ContainerDied","Data":"d4b08387fdd1ed850fd07ef776360a92b3369921dac3b52c96b69b60416fd810"} Feb 23 13:31:03 crc kubenswrapper[4851]: I0223 13:31:03.509928 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"44d82832-bb2c-4bfe-a9c0-a22e00484c71","Type":"ContainerStarted","Data":"06d89401c017160a8967d57b3416c5436c60a079f3ca65bc34cea6983331f403"} Feb 23 13:31:03 crc kubenswrapper[4851]: I0223 13:31:03.510533 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 23 13:31:03 crc kubenswrapper[4851]: I0223 13:31:03.536496 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.536475332 podStartE2EDuration="36.536475332s" podCreationTimestamp="2026-02-23 13:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:31:03.531208024 +0000 UTC m=+1418.212911722" watchObservedRunningTime="2026-02-23 13:31:03.536475332 +0000 UTC m=+1418.218179020" Feb 23 13:31:04 crc kubenswrapper[4851]: I0223 13:31:04.520844 4851 generic.go:334] "Generic (PLEG): container finished" podID="d2aa1b0e-e4a7-4365-99c9-4e521e896925" containerID="68b9b9c10b11ce75c4a571cee4b567c170a6e917e5ffbd32777cbaa3f67d16a9" exitCode=0 Feb 23 
13:31:04 crc kubenswrapper[4851]: I0223 13:31:04.520919 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d2aa1b0e-e4a7-4365-99c9-4e521e896925","Type":"ContainerDied","Data":"68b9b9c10b11ce75c4a571cee4b567c170a6e917e5ffbd32777cbaa3f67d16a9"} Feb 23 13:31:05 crc kubenswrapper[4851]: I0223 13:31:05.529417 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d2aa1b0e-e4a7-4365-99c9-4e521e896925","Type":"ContainerStarted","Data":"57a618ee7dfdd7f701bcee6a34fddf02adbc7e811c65490ff7430b893d3bda6c"} Feb 23 13:31:05 crc kubenswrapper[4851]: I0223 13:31:05.530213 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:31:05 crc kubenswrapper[4851]: I0223 13:31:05.551041 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.551022325 podStartE2EDuration="36.551022325s" podCreationTimestamp="2026-02-23 13:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 13:31:05.550736937 +0000 UTC m=+1420.232440625" watchObservedRunningTime="2026-02-23 13:31:05.551022325 +0000 UTC m=+1420.232726003" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.702693 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj"] Feb 23 13:31:07 crc kubenswrapper[4851]: E0223 13:31:07.703343 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9baced-54f4-4e5e-ab82-6aed7824b9d7" containerName="init" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.703355 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9baced-54f4-4e5e-ab82-6aed7824b9d7" containerName="init" Feb 23 13:31:07 crc kubenswrapper[4851]: E0223 13:31:07.703375 4851 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" containerName="init" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.703381 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" containerName="init" Feb 23 13:31:07 crc kubenswrapper[4851]: E0223 13:31:07.703399 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f9baced-54f4-4e5e-ab82-6aed7824b9d7" containerName="dnsmasq-dns" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.703405 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f9baced-54f4-4e5e-ab82-6aed7824b9d7" containerName="dnsmasq-dns" Feb 23 13:31:07 crc kubenswrapper[4851]: E0223 13:31:07.703428 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" containerName="dnsmasq-dns" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.703433 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" containerName="dnsmasq-dns" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.703604 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f9baced-54f4-4e5e-ab82-6aed7824b9d7" containerName="dnsmasq-dns" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.703616 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b365fa-5ec6-46bd-a4b1-6855571b2fdd" containerName="dnsmasq-dns" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.704210 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.706105 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.706438 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.706642 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.707114 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.719936 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj"] Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.802275 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.802356 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnmc4\" (UniqueName: \"kubernetes.io/projected/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-kube-api-access-pnmc4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 
13:31:07.802400 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.802540 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.904748 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.904823 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.904914 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.904955 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnmc4\" (UniqueName: \"kubernetes.io/projected/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-kube-api-access-pnmc4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.910689 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.916853 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.917216 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:07 crc kubenswrapper[4851]: I0223 13:31:07.920271 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnmc4\" (UniqueName: \"kubernetes.io/projected/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-kube-api-access-pnmc4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:08 crc kubenswrapper[4851]: I0223 13:31:08.027451 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:08 crc kubenswrapper[4851]: I0223 13:31:08.635699 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj"] Feb 23 13:31:08 crc kubenswrapper[4851]: I0223 13:31:08.636034 4851 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:31:09 crc kubenswrapper[4851]: I0223 13:31:09.568879 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" event={"ID":"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a","Type":"ContainerStarted","Data":"91ac7c2533a6febc125f50be66e609b4091083c94f8f53d684f6204aa16f74f1"} Feb 23 13:31:17 crc kubenswrapper[4851]: I0223 13:31:17.816527 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 23 13:31:18 crc kubenswrapper[4851]: I0223 13:31:18.654400 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" event={"ID":"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a","Type":"ContainerStarted","Data":"6b460b6f8a599f76536d42942a3a144ec93d26df818e24a229b67977b20e2448"} Feb 23 13:31:18 crc kubenswrapper[4851]: I0223 13:31:18.680553 4851 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" podStartSLOduration=2.8125590750000002 podStartE2EDuration="11.680536178s" podCreationTimestamp="2026-02-23 13:31:07 +0000 UTC" firstStartedPulling="2026-02-23 13:31:08.635501943 +0000 UTC m=+1423.317205621" lastFinishedPulling="2026-02-23 13:31:17.503479046 +0000 UTC m=+1432.185182724" observedRunningTime="2026-02-23 13:31:18.669659222 +0000 UTC m=+1433.351362900" watchObservedRunningTime="2026-02-23 13:31:18.680536178 +0000 UTC m=+1433.362239856" Feb 23 13:31:19 crc kubenswrapper[4851]: I0223 13:31:19.576544 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 23 13:31:28 crc kubenswrapper[4851]: I0223 13:31:28.737539 4851 generic.go:334] "Generic (PLEG): container finished" podID="0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a" containerID="6b460b6f8a599f76536d42942a3a144ec93d26df818e24a229b67977b20e2448" exitCode=0 Feb 23 13:31:28 crc kubenswrapper[4851]: I0223 13:31:28.738088 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" event={"ID":"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a","Type":"ContainerDied","Data":"6b460b6f8a599f76536d42942a3a144ec93d26df818e24a229b67977b20e2448"} Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.114310 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.233595 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnmc4\" (UniqueName: \"kubernetes.io/projected/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-kube-api-access-pnmc4\") pod \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.233886 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-inventory\") pod \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.234101 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-ssh-key-openstack-edpm-ipam\") pod \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.234373 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-repo-setup-combined-ca-bundle\") pod \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\" (UID: \"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a\") " Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.239619 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-kube-api-access-pnmc4" (OuterVolumeSpecName: "kube-api-access-pnmc4") pod "0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a" (UID: "0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a"). InnerVolumeSpecName "kube-api-access-pnmc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.241672 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a" (UID: "0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.259552 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a" (UID: "0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.265220 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-inventory" (OuterVolumeSpecName: "inventory") pod "0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a" (UID: "0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.337684 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.337714 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.337725 4851 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.337734 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnmc4\" (UniqueName: \"kubernetes.io/projected/0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a-kube-api-access-pnmc4\") on node \"crc\" DevicePath \"\"" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.755724 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" event={"ID":"0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a","Type":"ContainerDied","Data":"91ac7c2533a6febc125f50be66e609b4091083c94f8f53d684f6204aa16f74f1"} Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.756038 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91ac7c2533a6febc125f50be66e609b4091083c94f8f53d684f6204aa16f74f1" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.755798 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.830080 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq"] Feb 23 13:31:30 crc kubenswrapper[4851]: E0223 13:31:30.830459 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.830479 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.830666 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.831250 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.833403 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.833637 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.833942 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.834747 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.844314 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq"] Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.948882 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zgsdq\" (UID: \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.949202 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfj6z\" (UniqueName: \"kubernetes.io/projected/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-kube-api-access-sfj6z\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zgsdq\" (UID: \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:30 crc kubenswrapper[4851]: I0223 13:31:30.949471 4851 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zgsdq\" (UID: \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:31 crc kubenswrapper[4851]: I0223 13:31:31.051616 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zgsdq\" (UID: \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:31 crc kubenswrapper[4851]: I0223 13:31:31.051689 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfj6z\" (UniqueName: \"kubernetes.io/projected/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-kube-api-access-sfj6z\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zgsdq\" (UID: \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:31 crc kubenswrapper[4851]: I0223 13:31:31.051869 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zgsdq\" (UID: \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:31 crc kubenswrapper[4851]: I0223 13:31:31.056725 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zgsdq\" (UID: 
\"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:31 crc kubenswrapper[4851]: I0223 13:31:31.058364 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zgsdq\" (UID: \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:31 crc kubenswrapper[4851]: I0223 13:31:31.068097 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfj6z\" (UniqueName: \"kubernetes.io/projected/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-kube-api-access-sfj6z\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zgsdq\" (UID: \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:31 crc kubenswrapper[4851]: I0223 13:31:31.161290 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:31 crc kubenswrapper[4851]: W0223 13:31:31.653453 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dc4f23f_fc11_4cf6_9740_ec259ac3823e.slice/crio-70ad683491b347e4f30082d62b93ab134768f85a60c2aa0e682a686d8f21c26f WatchSource:0}: Error finding container 70ad683491b347e4f30082d62b93ab134768f85a60c2aa0e682a686d8f21c26f: Status 404 returned error can't find the container with id 70ad683491b347e4f30082d62b93ab134768f85a60c2aa0e682a686d8f21c26f Feb 23 13:31:31 crc kubenswrapper[4851]: I0223 13:31:31.661856 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq"] Feb 23 13:31:31 crc kubenswrapper[4851]: I0223 13:31:31.765238 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" event={"ID":"7dc4f23f-fc11-4cf6-9740-ec259ac3823e","Type":"ContainerStarted","Data":"70ad683491b347e4f30082d62b93ab134768f85a60c2aa0e682a686d8f21c26f"} Feb 23 13:31:32 crc kubenswrapper[4851]: I0223 13:31:32.775742 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" event={"ID":"7dc4f23f-fc11-4cf6-9740-ec259ac3823e","Type":"ContainerStarted","Data":"a01727791bb98d77fbda4bac6fdd22c880fb61ae979544a2a4d0b90acf731e5b"} Feb 23 13:31:32 crc kubenswrapper[4851]: I0223 13:31:32.812026 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" podStartSLOduration=2.403862036 podStartE2EDuration="2.81199984s" podCreationTimestamp="2026-02-23 13:31:30 +0000 UTC" firstStartedPulling="2026-02-23 13:31:31.656921716 +0000 UTC m=+1446.338625384" lastFinishedPulling="2026-02-23 13:31:32.06505951 +0000 UTC m=+1446.746763188" observedRunningTime="2026-02-23 
13:31:32.808422939 +0000 UTC m=+1447.490126637" watchObservedRunningTime="2026-02-23 13:31:32.81199984 +0000 UTC m=+1447.493703538" Feb 23 13:31:34 crc kubenswrapper[4851]: I0223 13:31:34.795488 4851 generic.go:334] "Generic (PLEG): container finished" podID="7dc4f23f-fc11-4cf6-9740-ec259ac3823e" containerID="a01727791bb98d77fbda4bac6fdd22c880fb61ae979544a2a4d0b90acf731e5b" exitCode=0 Feb 23 13:31:34 crc kubenswrapper[4851]: I0223 13:31:34.795547 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" event={"ID":"7dc4f23f-fc11-4cf6-9740-ec259ac3823e","Type":"ContainerDied","Data":"a01727791bb98d77fbda4bac6fdd22c880fb61ae979544a2a4d0b90acf731e5b"} Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.265536 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.345881 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-inventory\") pod \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\" (UID: \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.345943 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfj6z\" (UniqueName: \"kubernetes.io/projected/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-kube-api-access-sfj6z\") pod \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\" (UID: \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.346088 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-ssh-key-openstack-edpm-ipam\") pod \"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\" (UID: 
\"7dc4f23f-fc11-4cf6-9740-ec259ac3823e\") " Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.354652 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-kube-api-access-sfj6z" (OuterVolumeSpecName: "kube-api-access-sfj6z") pod "7dc4f23f-fc11-4cf6-9740-ec259ac3823e" (UID: "7dc4f23f-fc11-4cf6-9740-ec259ac3823e"). InnerVolumeSpecName "kube-api-access-sfj6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.375616 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-inventory" (OuterVolumeSpecName: "inventory") pod "7dc4f23f-fc11-4cf6-9740-ec259ac3823e" (UID: "7dc4f23f-fc11-4cf6-9740-ec259ac3823e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.378570 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7dc4f23f-fc11-4cf6-9740-ec259ac3823e" (UID: "7dc4f23f-fc11-4cf6-9740-ec259ac3823e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.435972 4851 scope.go:117] "RemoveContainer" containerID="ae52dfb893714d220829e795fa56e055ff1e5fc0c67933d0c69ae12738a94fa2" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.448850 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.448890 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfj6z\" (UniqueName: \"kubernetes.io/projected/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-kube-api-access-sfj6z\") on node \"crc\" DevicePath \"\"" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.448904 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dc4f23f-fc11-4cf6-9740-ec259ac3823e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.457118 4851 scope.go:117] "RemoveContainer" containerID="35160ece8042e45142d86cd3a357f85302d0b42389996e7785a69779f925100b" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.812946 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" event={"ID":"7dc4f23f-fc11-4cf6-9740-ec259ac3823e","Type":"ContainerDied","Data":"70ad683491b347e4f30082d62b93ab134768f85a60c2aa0e682a686d8f21c26f"} Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.812989 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70ad683491b347e4f30082d62b93ab134768f85a60c2aa0e682a686d8f21c26f" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.813239 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zgsdq" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.885055 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9"] Feb 23 13:31:36 crc kubenswrapper[4851]: E0223 13:31:36.885551 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc4f23f-fc11-4cf6-9740-ec259ac3823e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.885575 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc4f23f-fc11-4cf6-9740-ec259ac3823e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.885843 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc4f23f-fc11-4cf6-9740-ec259ac3823e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.886628 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.896014 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9"] Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.896147 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.897615 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.900650 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.900667 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.956659 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.956953 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:36 crc kubenswrapper[4851]: 
I0223 13:31:36.957145 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5467\" (UniqueName: \"kubernetes.io/projected/a83f6021-68fd-4a69-8d49-534de4546eee-kube-api-access-z5467\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:36 crc kubenswrapper[4851]: I0223 13:31:36.957264 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:37 crc kubenswrapper[4851]: I0223 13:31:37.059032 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5467\" (UniqueName: \"kubernetes.io/projected/a83f6021-68fd-4a69-8d49-534de4546eee-kube-api-access-z5467\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:37 crc kubenswrapper[4851]: I0223 13:31:37.059127 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:37 crc kubenswrapper[4851]: I0223 13:31:37.059537 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:37 crc kubenswrapper[4851]: I0223 13:31:37.059566 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:37 crc kubenswrapper[4851]: I0223 13:31:37.065078 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:37 crc kubenswrapper[4851]: I0223 13:31:37.065173 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:37 crc kubenswrapper[4851]: I0223 13:31:37.066876 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:37 crc kubenswrapper[4851]: I0223 13:31:37.075224 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5467\" (UniqueName: \"kubernetes.io/projected/a83f6021-68fd-4a69-8d49-534de4546eee-kube-api-access-z5467\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:37 crc kubenswrapper[4851]: I0223 13:31:37.201794 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" Feb 23 13:31:37 crc kubenswrapper[4851]: I0223 13:31:37.731725 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9"] Feb 23 13:31:37 crc kubenswrapper[4851]: I0223 13:31:37.822708 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" event={"ID":"a83f6021-68fd-4a69-8d49-534de4546eee","Type":"ContainerStarted","Data":"f92bfffc67bdb3f6618efb75ce35505c354908f5badf5c83b89363967b59bab0"} Feb 23 13:31:38 crc kubenswrapper[4851]: I0223 13:31:38.831553 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" event={"ID":"a83f6021-68fd-4a69-8d49-534de4546eee","Type":"ContainerStarted","Data":"310b4c84c5df38ac4e312ad015cd92f6eda32986ab4a2669ef039fabc93c5071"} Feb 23 13:31:38 crc kubenswrapper[4851]: I0223 13:31:38.860284 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" podStartSLOduration=2.482106098 podStartE2EDuration="2.86026196s" podCreationTimestamp="2026-02-23 13:31:36 +0000 UTC" firstStartedPulling="2026-02-23 13:31:37.73377007 +0000 UTC m=+1452.415473748" 
lastFinishedPulling="2026-02-23 13:31:38.111925932 +0000 UTC m=+1452.793629610" observedRunningTime="2026-02-23 13:31:38.854547489 +0000 UTC m=+1453.536251167" watchObservedRunningTime="2026-02-23 13:31:38.86026196 +0000 UTC m=+1453.541965638" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.454025 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rs6bx"] Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.457169 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.464341 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs6bx"] Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.549638 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-utilities\") pod \"redhat-marketplace-rs6bx\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.549854 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-catalog-content\") pod \"redhat-marketplace-rs6bx\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.549927 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lklhd\" (UniqueName: \"kubernetes.io/projected/535ce362-e1e5-46ef-9ea4-b69a3c953025-kube-api-access-lklhd\") pod \"redhat-marketplace-rs6bx\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " 
pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.651380 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-utilities\") pod \"redhat-marketplace-rs6bx\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.651468 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-catalog-content\") pod \"redhat-marketplace-rs6bx\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.651497 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lklhd\" (UniqueName: \"kubernetes.io/projected/535ce362-e1e5-46ef-9ea4-b69a3c953025-kube-api-access-lklhd\") pod \"redhat-marketplace-rs6bx\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.652005 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gqs6n"] Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.652064 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-utilities\") pod \"redhat-marketplace-rs6bx\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.652015 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-catalog-content\") pod \"redhat-marketplace-rs6bx\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.654215 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.676930 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lklhd\" (UniqueName: \"kubernetes.io/projected/535ce362-e1e5-46ef-9ea4-b69a3c953025-kube-api-access-lklhd\") pod \"redhat-marketplace-rs6bx\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.681544 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqs6n"] Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.752847 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67d75\" (UniqueName: \"kubernetes.io/projected/8729ca56-5cdd-4fc0-abcd-f274b5f30348-kube-api-access-67d75\") pod \"redhat-operators-gqs6n\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.753149 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-utilities\") pod \"redhat-operators-gqs6n\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.753264 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-catalog-content\") pod \"redhat-operators-gqs6n\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.774640 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.854712 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67d75\" (UniqueName: \"kubernetes.io/projected/8729ca56-5cdd-4fc0-abcd-f274b5f30348-kube-api-access-67d75\") pod \"redhat-operators-gqs6n\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.854811 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-utilities\") pod \"redhat-operators-gqs6n\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.854974 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-catalog-content\") pod \"redhat-operators-gqs6n\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.855576 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-catalog-content\") pod \"redhat-operators-gqs6n\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:06 crc 
kubenswrapper[4851]: I0223 13:32:06.855576 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-utilities\") pod \"redhat-operators-gqs6n\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.890290 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67d75\" (UniqueName: \"kubernetes.io/projected/8729ca56-5cdd-4fc0-abcd-f274b5f30348-kube-api-access-67d75\") pod \"redhat-operators-gqs6n\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:06 crc kubenswrapper[4851]: I0223 13:32:06.971672 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:07 crc kubenswrapper[4851]: I0223 13:32:07.301834 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs6bx"] Feb 23 13:32:07 crc kubenswrapper[4851]: I0223 13:32:07.475364 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqs6n"] Feb 23 13:32:07 crc kubenswrapper[4851]: W0223 13:32:07.483948 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8729ca56_5cdd_4fc0_abcd_f274b5f30348.slice/crio-e2dacded9ec4526572927faf5bc35addf3057c3504902fec0a8b54b3569f8596 WatchSource:0}: Error finding container e2dacded9ec4526572927faf5bc35addf3057c3504902fec0a8b54b3569f8596: Status 404 returned error can't find the container with id e2dacded9ec4526572927faf5bc35addf3057c3504902fec0a8b54b3569f8596 Feb 23 13:32:08 crc kubenswrapper[4851]: I0223 13:32:08.078535 4851 generic.go:334] "Generic (PLEG): container finished" podID="535ce362-e1e5-46ef-9ea4-b69a3c953025" 
containerID="433b8319c3fb452942835a285954efa99d9a15a8ccc0a2a89adddce152242ac8" exitCode=0 Feb 23 13:32:08 crc kubenswrapper[4851]: I0223 13:32:08.078621 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs6bx" event={"ID":"535ce362-e1e5-46ef-9ea4-b69a3c953025","Type":"ContainerDied","Data":"433b8319c3fb452942835a285954efa99d9a15a8ccc0a2a89adddce152242ac8"} Feb 23 13:32:08 crc kubenswrapper[4851]: I0223 13:32:08.078647 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs6bx" event={"ID":"535ce362-e1e5-46ef-9ea4-b69a3c953025","Type":"ContainerStarted","Data":"8e8e177b4e8874f62634ef375aca3671ec585cb59bbcb630606bc637cf9179a7"} Feb 23 13:32:08 crc kubenswrapper[4851]: I0223 13:32:08.082260 4851 generic.go:334] "Generic (PLEG): container finished" podID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerID="2caffd64881e5cf5c4f6031ffdb336500e61bc49cd7ec40f04b0c7c24d5cb65b" exitCode=0 Feb 23 13:32:08 crc kubenswrapper[4851]: I0223 13:32:08.082310 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqs6n" event={"ID":"8729ca56-5cdd-4fc0-abcd-f274b5f30348","Type":"ContainerDied","Data":"2caffd64881e5cf5c4f6031ffdb336500e61bc49cd7ec40f04b0c7c24d5cb65b"} Feb 23 13:32:08 crc kubenswrapper[4851]: I0223 13:32:08.082360 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqs6n" event={"ID":"8729ca56-5cdd-4fc0-abcd-f274b5f30348","Type":"ContainerStarted","Data":"e2dacded9ec4526572927faf5bc35addf3057c3504902fec0a8b54b3569f8596"} Feb 23 13:32:09 crc kubenswrapper[4851]: I0223 13:32:09.094546 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqs6n" event={"ID":"8729ca56-5cdd-4fc0-abcd-f274b5f30348","Type":"ContainerStarted","Data":"d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20"} Feb 23 13:32:09 crc kubenswrapper[4851]: E0223 
13:32:09.792876 4851 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8729ca56_5cdd_4fc0_abcd_f274b5f30348.slice/crio-d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20.scope\": RecentStats: unable to find data in memory cache]" Feb 23 13:32:10 crc kubenswrapper[4851]: I0223 13:32:10.103273 4851 generic.go:334] "Generic (PLEG): container finished" podID="535ce362-e1e5-46ef-9ea4-b69a3c953025" containerID="164a24ac571ca8b6fff607f5e8af95186e68ced3cc7e0ae893959bfb12cef962" exitCode=0 Feb 23 13:32:10 crc kubenswrapper[4851]: I0223 13:32:10.103393 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs6bx" event={"ID":"535ce362-e1e5-46ef-9ea4-b69a3c953025","Type":"ContainerDied","Data":"164a24ac571ca8b6fff607f5e8af95186e68ced3cc7e0ae893959bfb12cef962"} Feb 23 13:32:11 crc kubenswrapper[4851]: I0223 13:32:11.114948 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs6bx" event={"ID":"535ce362-e1e5-46ef-9ea4-b69a3c953025","Type":"ContainerStarted","Data":"edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba"} Feb 23 13:32:11 crc kubenswrapper[4851]: I0223 13:32:11.138678 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rs6bx" podStartSLOduration=2.656083658 podStartE2EDuration="5.138659243s" podCreationTimestamp="2026-02-23 13:32:06 +0000 UTC" firstStartedPulling="2026-02-23 13:32:08.080293 +0000 UTC m=+1482.761996718" lastFinishedPulling="2026-02-23 13:32:10.562868625 +0000 UTC m=+1485.244572303" observedRunningTime="2026-02-23 13:32:11.132462359 +0000 UTC m=+1485.814166057" watchObservedRunningTime="2026-02-23 13:32:11.138659243 +0000 UTC m=+1485.820362921" Feb 23 13:32:12 crc kubenswrapper[4851]: I0223 13:32:12.126630 4851 generic.go:334] "Generic (PLEG): 
container finished" podID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerID="d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20" exitCode=0 Feb 23 13:32:12 crc kubenswrapper[4851]: I0223 13:32:12.126887 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqs6n" event={"ID":"8729ca56-5cdd-4fc0-abcd-f274b5f30348","Type":"ContainerDied","Data":"d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20"} Feb 23 13:32:13 crc kubenswrapper[4851]: I0223 13:32:13.136556 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqs6n" event={"ID":"8729ca56-5cdd-4fc0-abcd-f274b5f30348","Type":"ContainerStarted","Data":"d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3"} Feb 23 13:32:13 crc kubenswrapper[4851]: I0223 13:32:13.157940 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gqs6n" podStartSLOduration=2.51955259 podStartE2EDuration="7.157917993s" podCreationTimestamp="2026-02-23 13:32:06 +0000 UTC" firstStartedPulling="2026-02-23 13:32:08.084216271 +0000 UTC m=+1482.765919949" lastFinishedPulling="2026-02-23 13:32:12.722581674 +0000 UTC m=+1487.404285352" observedRunningTime="2026-02-23 13:32:13.155108144 +0000 UTC m=+1487.836811842" watchObservedRunningTime="2026-02-23 13:32:13.157917993 +0000 UTC m=+1487.839621671" Feb 23 13:32:16 crc kubenswrapper[4851]: I0223 13:32:16.775440 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:16 crc kubenswrapper[4851]: I0223 13:32:16.775726 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:16 crc kubenswrapper[4851]: I0223 13:32:16.835762 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 
13:32:16 crc kubenswrapper[4851]: I0223 13:32:16.972771 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:16 crc kubenswrapper[4851]: I0223 13:32:16.973046 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:17 crc kubenswrapper[4851]: I0223 13:32:17.267636 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:17 crc kubenswrapper[4851]: I0223 13:32:17.319768 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs6bx"] Feb 23 13:32:18 crc kubenswrapper[4851]: I0223 13:32:18.022087 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gqs6n" podUID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerName="registry-server" probeResult="failure" output=< Feb 23 13:32:18 crc kubenswrapper[4851]: timeout: failed to connect service ":50051" within 1s Feb 23 13:32:18 crc kubenswrapper[4851]: > Feb 23 13:32:19 crc kubenswrapper[4851]: I0223 13:32:19.224569 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rs6bx" podUID="535ce362-e1e5-46ef-9ea4-b69a3c953025" containerName="registry-server" containerID="cri-o://edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba" gracePeriod=2 Feb 23 13:32:19 crc kubenswrapper[4851]: I0223 13:32:19.671825 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:19 crc kubenswrapper[4851]: I0223 13:32:19.777420 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lklhd\" (UniqueName: \"kubernetes.io/projected/535ce362-e1e5-46ef-9ea4-b69a3c953025-kube-api-access-lklhd\") pod \"535ce362-e1e5-46ef-9ea4-b69a3c953025\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " Feb 23 13:32:19 crc kubenswrapper[4851]: I0223 13:32:19.777742 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-utilities\") pod \"535ce362-e1e5-46ef-9ea4-b69a3c953025\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " Feb 23 13:32:19 crc kubenswrapper[4851]: I0223 13:32:19.777806 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-catalog-content\") pod \"535ce362-e1e5-46ef-9ea4-b69a3c953025\" (UID: \"535ce362-e1e5-46ef-9ea4-b69a3c953025\") " Feb 23 13:32:19 crc kubenswrapper[4851]: I0223 13:32:19.778291 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-utilities" (OuterVolumeSpecName: "utilities") pod "535ce362-e1e5-46ef-9ea4-b69a3c953025" (UID: "535ce362-e1e5-46ef-9ea4-b69a3c953025"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:32:19 crc kubenswrapper[4851]: I0223 13:32:19.787212 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535ce362-e1e5-46ef-9ea4-b69a3c953025-kube-api-access-lklhd" (OuterVolumeSpecName: "kube-api-access-lklhd") pod "535ce362-e1e5-46ef-9ea4-b69a3c953025" (UID: "535ce362-e1e5-46ef-9ea4-b69a3c953025"). InnerVolumeSpecName "kube-api-access-lklhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:32:19 crc kubenswrapper[4851]: I0223 13:32:19.796963 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "535ce362-e1e5-46ef-9ea4-b69a3c953025" (UID: "535ce362-e1e5-46ef-9ea4-b69a3c953025"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:32:19 crc kubenswrapper[4851]: I0223 13:32:19.879648 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lklhd\" (UniqueName: \"kubernetes.io/projected/535ce362-e1e5-46ef-9ea4-b69a3c953025-kube-api-access-lklhd\") on node \"crc\" DevicePath \"\"" Feb 23 13:32:19 crc kubenswrapper[4851]: I0223 13:32:19.879692 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:32:19 crc kubenswrapper[4851]: I0223 13:32:19.879723 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/535ce362-e1e5-46ef-9ea4-b69a3c953025-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:32:20 crc kubenswrapper[4851]: E0223 13:32:20.045546 4851 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod535ce362_e1e5_46ef_9ea4_b69a3c953025.slice/crio-8e8e177b4e8874f62634ef375aca3671ec585cb59bbcb630606bc637cf9179a7\": RecentStats: unable to find data in memory cache]" Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.234108 4851 generic.go:334] "Generic (PLEG): container finished" podID="535ce362-e1e5-46ef-9ea4-b69a3c953025" containerID="edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba" exitCode=0 Feb 23 13:32:20 crc 
kubenswrapper[4851]: I0223 13:32:20.234149 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs6bx" event={"ID":"535ce362-e1e5-46ef-9ea4-b69a3c953025","Type":"ContainerDied","Data":"edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba"} Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.234191 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs6bx" event={"ID":"535ce362-e1e5-46ef-9ea4-b69a3c953025","Type":"ContainerDied","Data":"8e8e177b4e8874f62634ef375aca3671ec585cb59bbcb630606bc637cf9179a7"} Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.234195 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs6bx" Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.234208 4851 scope.go:117] "RemoveContainer" containerID="edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba" Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.259464 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs6bx"] Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.263199 4851 scope.go:117] "RemoveContainer" containerID="164a24ac571ca8b6fff607f5e8af95186e68ced3cc7e0ae893959bfb12cef962" Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.276425 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs6bx"] Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.289201 4851 scope.go:117] "RemoveContainer" containerID="433b8319c3fb452942835a285954efa99d9a15a8ccc0a2a89adddce152242ac8" Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.329822 4851 scope.go:117] "RemoveContainer" containerID="edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba" Feb 23 13:32:20 crc kubenswrapper[4851]: E0223 13:32:20.330520 4851 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba\": container with ID starting with edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba not found: ID does not exist" containerID="edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba" Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.330592 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba"} err="failed to get container status \"edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba\": rpc error: code = NotFound desc = could not find container \"edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba\": container with ID starting with edda846e3c6393cabfcc90e7157199a4a3d38121ea3a9fad403de1e593a3fbba not found: ID does not exist" Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.330624 4851 scope.go:117] "RemoveContainer" containerID="164a24ac571ca8b6fff607f5e8af95186e68ced3cc7e0ae893959bfb12cef962" Feb 23 13:32:20 crc kubenswrapper[4851]: E0223 13:32:20.330976 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164a24ac571ca8b6fff607f5e8af95186e68ced3cc7e0ae893959bfb12cef962\": container with ID starting with 164a24ac571ca8b6fff607f5e8af95186e68ced3cc7e0ae893959bfb12cef962 not found: ID does not exist" containerID="164a24ac571ca8b6fff607f5e8af95186e68ced3cc7e0ae893959bfb12cef962" Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.331035 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164a24ac571ca8b6fff607f5e8af95186e68ced3cc7e0ae893959bfb12cef962"} err="failed to get container status \"164a24ac571ca8b6fff607f5e8af95186e68ced3cc7e0ae893959bfb12cef962\": rpc error: code = NotFound desc = could not find container 
\"164a24ac571ca8b6fff607f5e8af95186e68ced3cc7e0ae893959bfb12cef962\": container with ID starting with 164a24ac571ca8b6fff607f5e8af95186e68ced3cc7e0ae893959bfb12cef962 not found: ID does not exist" Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.331074 4851 scope.go:117] "RemoveContainer" containerID="433b8319c3fb452942835a285954efa99d9a15a8ccc0a2a89adddce152242ac8" Feb 23 13:32:20 crc kubenswrapper[4851]: E0223 13:32:20.331422 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433b8319c3fb452942835a285954efa99d9a15a8ccc0a2a89adddce152242ac8\": container with ID starting with 433b8319c3fb452942835a285954efa99d9a15a8ccc0a2a89adddce152242ac8 not found: ID does not exist" containerID="433b8319c3fb452942835a285954efa99d9a15a8ccc0a2a89adddce152242ac8" Feb 23 13:32:20 crc kubenswrapper[4851]: I0223 13:32:20.331453 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433b8319c3fb452942835a285954efa99d9a15a8ccc0a2a89adddce152242ac8"} err="failed to get container status \"433b8319c3fb452942835a285954efa99d9a15a8ccc0a2a89adddce152242ac8\": rpc error: code = NotFound desc = could not find container \"433b8319c3fb452942835a285954efa99d9a15a8ccc0a2a89adddce152242ac8\": container with ID starting with 433b8319c3fb452942835a285954efa99d9a15a8ccc0a2a89adddce152242ac8 not found: ID does not exist" Feb 23 13:32:21 crc kubenswrapper[4851]: I0223 13:32:21.979970 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="535ce362-e1e5-46ef-9ea4-b69a3c953025" path="/var/lib/kubelet/pods/535ce362-e1e5-46ef-9ea4-b69a3c953025/volumes" Feb 23 13:32:27 crc kubenswrapper[4851]: I0223 13:32:27.022175 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:27 crc kubenswrapper[4851]: I0223 13:32:27.071195 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.327500 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqs6n"] Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.329476 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gqs6n" podUID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerName="registry-server" containerID="cri-o://d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3" gracePeriod=2 Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.538865 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-drxbv"] Feb 23 13:32:31 crc kubenswrapper[4851]: E0223 13:32:31.539740 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535ce362-e1e5-46ef-9ea4-b69a3c953025" containerName="extract-utilities" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.539759 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="535ce362-e1e5-46ef-9ea4-b69a3c953025" containerName="extract-utilities" Feb 23 13:32:31 crc kubenswrapper[4851]: E0223 13:32:31.539788 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535ce362-e1e5-46ef-9ea4-b69a3c953025" containerName="registry-server" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.539794 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="535ce362-e1e5-46ef-9ea4-b69a3c953025" containerName="registry-server" Feb 23 13:32:31 crc kubenswrapper[4851]: E0223 13:32:31.539814 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535ce362-e1e5-46ef-9ea4-b69a3c953025" containerName="extract-content" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.539819 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="535ce362-e1e5-46ef-9ea4-b69a3c953025" containerName="extract-content" Feb 23 13:32:31 crc 
kubenswrapper[4851]: I0223 13:32:31.539977 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="535ce362-e1e5-46ef-9ea4-b69a3c953025" containerName="registry-server" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.541502 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drxbv" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.554289 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drxbv"] Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.577506 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-utilities\") pod \"certified-operators-drxbv\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") " pod="openshift-marketplace/certified-operators-drxbv" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.577705 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwpmz\" (UniqueName: \"kubernetes.io/projected/25107006-626d-4d88-bba0-92dc5444d247-kube-api-access-dwpmz\") pod \"certified-operators-drxbv\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") " pod="openshift-marketplace/certified-operators-drxbv" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.577789 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-catalog-content\") pod \"certified-operators-drxbv\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") " pod="openshift-marketplace/certified-operators-drxbv" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.684484 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-catalog-content\") pod \"certified-operators-drxbv\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") " pod="openshift-marketplace/certified-operators-drxbv" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.684669 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-utilities\") pod \"certified-operators-drxbv\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") " pod="openshift-marketplace/certified-operators-drxbv" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.684953 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwpmz\" (UniqueName: \"kubernetes.io/projected/25107006-626d-4d88-bba0-92dc5444d247-kube-api-access-dwpmz\") pod \"certified-operators-drxbv\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") " pod="openshift-marketplace/certified-operators-drxbv" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.685122 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-catalog-content\") pod \"certified-operators-drxbv\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") " pod="openshift-marketplace/certified-operators-drxbv" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.685369 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-utilities\") pod \"certified-operators-drxbv\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") " pod="openshift-marketplace/certified-operators-drxbv" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.707055 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwpmz\" (UniqueName: 
\"kubernetes.io/projected/25107006-626d-4d88-bba0-92dc5444d247-kube-api-access-dwpmz\") pod \"certified-operators-drxbv\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") " pod="openshift-marketplace/certified-operators-drxbv" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.785212 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.861435 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drxbv" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.888175 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-utilities\") pod \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.888420 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-catalog-content\") pod \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.888467 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67d75\" (UniqueName: \"kubernetes.io/projected/8729ca56-5cdd-4fc0-abcd-f274b5f30348-kube-api-access-67d75\") pod \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\" (UID: \"8729ca56-5cdd-4fc0-abcd-f274b5f30348\") " Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.889218 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-utilities" (OuterVolumeSpecName: "utilities") pod "8729ca56-5cdd-4fc0-abcd-f274b5f30348" (UID: 
"8729ca56-5cdd-4fc0-abcd-f274b5f30348"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.895189 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8729ca56-5cdd-4fc0-abcd-f274b5f30348-kube-api-access-67d75" (OuterVolumeSpecName: "kube-api-access-67d75") pod "8729ca56-5cdd-4fc0-abcd-f274b5f30348" (UID: "8729ca56-5cdd-4fc0-abcd-f274b5f30348"). InnerVolumeSpecName "kube-api-access-67d75". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.990932 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67d75\" (UniqueName: \"kubernetes.io/projected/8729ca56-5cdd-4fc0-abcd-f274b5f30348-kube-api-access-67d75\") on node \"crc\" DevicePath \"\"" Feb 23 13:32:31 crc kubenswrapper[4851]: I0223 13:32:31.991264 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.035668 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8729ca56-5cdd-4fc0-abcd-f274b5f30348" (UID: "8729ca56-5cdd-4fc0-abcd-f274b5f30348"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.093069 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8729ca56-5cdd-4fc0-abcd-f274b5f30348-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.339514 4851 generic.go:334] "Generic (PLEG): container finished" podID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerID="d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3" exitCode=0 Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.339556 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqs6n" event={"ID":"8729ca56-5cdd-4fc0-abcd-f274b5f30348","Type":"ContainerDied","Data":"d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3"} Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.339589 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqs6n" event={"ID":"8729ca56-5cdd-4fc0-abcd-f274b5f30348","Type":"ContainerDied","Data":"e2dacded9ec4526572927faf5bc35addf3057c3504902fec0a8b54b3569f8596"} Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.339607 4851 scope.go:117] "RemoveContainer" containerID="d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.339739 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqs6n" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.361230 4851 scope.go:117] "RemoveContainer" containerID="d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.380890 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqs6n"] Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.390760 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gqs6n"] Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.392282 4851 scope.go:117] "RemoveContainer" containerID="2caffd64881e5cf5c4f6031ffdb336500e61bc49cd7ec40f04b0c7c24d5cb65b" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.398491 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drxbv"] Feb 23 13:32:32 crc kubenswrapper[4851]: W0223 13:32:32.408377 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25107006_626d_4d88_bba0_92dc5444d247.slice/crio-291da8d4f31270920c75f7cb91a28b7170fd1768ce62ba719c17ec225dd3bfc1 WatchSource:0}: Error finding container 291da8d4f31270920c75f7cb91a28b7170fd1768ce62ba719c17ec225dd3bfc1: Status 404 returned error can't find the container with id 291da8d4f31270920c75f7cb91a28b7170fd1768ce62ba719c17ec225dd3bfc1 Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.419951 4851 scope.go:117] "RemoveContainer" containerID="d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3" Feb 23 13:32:32 crc kubenswrapper[4851]: E0223 13:32:32.420573 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3\": container with ID starting with 
d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3 not found: ID does not exist" containerID="d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.420628 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3"} err="failed to get container status \"d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3\": rpc error: code = NotFound desc = could not find container \"d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3\": container with ID starting with d67ff78cbb3aeefdee9a7f95a149f397defa112fd75ba1d6e31b8db1bb3824f3 not found: ID does not exist" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.420658 4851 scope.go:117] "RemoveContainer" containerID="d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20" Feb 23 13:32:32 crc kubenswrapper[4851]: E0223 13:32:32.421042 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20\": container with ID starting with d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20 not found: ID does not exist" containerID="d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.421081 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20"} err="failed to get container status \"d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20\": rpc error: code = NotFound desc = could not find container \"d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20\": container with ID starting with d6e06cbd313c37769abf9d7b782f2c1dabfcf03aef3d99aa9737f6415f35ea20 not found: ID does not 
exist" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.421108 4851 scope.go:117] "RemoveContainer" containerID="2caffd64881e5cf5c4f6031ffdb336500e61bc49cd7ec40f04b0c7c24d5cb65b" Feb 23 13:32:32 crc kubenswrapper[4851]: E0223 13:32:32.421477 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2caffd64881e5cf5c4f6031ffdb336500e61bc49cd7ec40f04b0c7c24d5cb65b\": container with ID starting with 2caffd64881e5cf5c4f6031ffdb336500e61bc49cd7ec40f04b0c7c24d5cb65b not found: ID does not exist" containerID="2caffd64881e5cf5c4f6031ffdb336500e61bc49cd7ec40f04b0c7c24d5cb65b" Feb 23 13:32:32 crc kubenswrapper[4851]: I0223 13:32:32.421530 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2caffd64881e5cf5c4f6031ffdb336500e61bc49cd7ec40f04b0c7c24d5cb65b"} err="failed to get container status \"2caffd64881e5cf5c4f6031ffdb336500e61bc49cd7ec40f04b0c7c24d5cb65b\": rpc error: code = NotFound desc = could not find container \"2caffd64881e5cf5c4f6031ffdb336500e61bc49cd7ec40f04b0c7c24d5cb65b\": container with ID starting with 2caffd64881e5cf5c4f6031ffdb336500e61bc49cd7ec40f04b0c7c24d5cb65b not found: ID does not exist" Feb 23 13:32:33 crc kubenswrapper[4851]: I0223 13:32:33.350161 4851 generic.go:334] "Generic (PLEG): container finished" podID="25107006-626d-4d88-bba0-92dc5444d247" containerID="9f2ad7e709e1aee8ec5b9d5de9a344d0e39a85bcbb3b2fbf5324ab002a6769f1" exitCode=0 Feb 23 13:32:33 crc kubenswrapper[4851]: I0223 13:32:33.350239 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drxbv" event={"ID":"25107006-626d-4d88-bba0-92dc5444d247","Type":"ContainerDied","Data":"9f2ad7e709e1aee8ec5b9d5de9a344d0e39a85bcbb3b2fbf5324ab002a6769f1"} Feb 23 13:32:33 crc kubenswrapper[4851]: I0223 13:32:33.350291 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drxbv" 
event={"ID":"25107006-626d-4d88-bba0-92dc5444d247","Type":"ContainerStarted","Data":"291da8d4f31270920c75f7cb91a28b7170fd1768ce62ba719c17ec225dd3bfc1"}
Feb 23 13:32:33 crc kubenswrapper[4851]: I0223 13:32:33.977789 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" path="/var/lib/kubelet/pods/8729ca56-5cdd-4fc0-abcd-f274b5f30348/volumes"
Feb 23 13:32:34 crc kubenswrapper[4851]: I0223 13:32:34.363637 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drxbv" event={"ID":"25107006-626d-4d88-bba0-92dc5444d247","Type":"ContainerStarted","Data":"15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf"}
Feb 23 13:32:35 crc kubenswrapper[4851]: I0223 13:32:35.374698 4851 generic.go:334] "Generic (PLEG): container finished" podID="25107006-626d-4d88-bba0-92dc5444d247" containerID="15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf" exitCode=0
Feb 23 13:32:35 crc kubenswrapper[4851]: I0223 13:32:35.374788 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drxbv" event={"ID":"25107006-626d-4d88-bba0-92dc5444d247","Type":"ContainerDied","Data":"15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf"}
Feb 23 13:32:36 crc kubenswrapper[4851]: I0223 13:32:36.385762 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drxbv" event={"ID":"25107006-626d-4d88-bba0-92dc5444d247","Type":"ContainerStarted","Data":"3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a"}
Feb 23 13:32:36 crc kubenswrapper[4851]: I0223 13:32:36.411775 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-drxbv" podStartSLOduration=3.034022912 podStartE2EDuration="5.411665796s" podCreationTimestamp="2026-02-23 13:32:31 +0000 UTC" firstStartedPulling="2026-02-23 13:32:33.352302337 +0000 UTC m=+1508.034006025" lastFinishedPulling="2026-02-23 13:32:35.729945221 +0000 UTC m=+1510.411648909" observedRunningTime="2026-02-23 13:32:36.40435947 +0000 UTC m=+1511.086063168" watchObservedRunningTime="2026-02-23 13:32:36.411665796 +0000 UTC m=+1511.093369474"
Feb 23 13:32:36 crc kubenswrapper[4851]: I0223 13:32:36.559027 4851 scope.go:117] "RemoveContainer" containerID="d9908e734ffd1681ee71fa90fd4e0d7755811d0799fafa8695ccaf2f6bc05f0b"
Feb 23 13:32:36 crc kubenswrapper[4851]: I0223 13:32:36.596082 4851 scope.go:117] "RemoveContainer" containerID="4774757c90c84af39a4b41cfecb98ad845c4216bb3a362fa221e71db14828f30"
Feb 23 13:32:36 crc kubenswrapper[4851]: I0223 13:32:36.627002 4851 scope.go:117] "RemoveContainer" containerID="fc0f15f4f74bd983ed422066f23056436521e781ac25b2ef0008971a01f5c960"
Feb 23 13:32:41 crc kubenswrapper[4851]: I0223 13:32:41.861942 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-drxbv"
Feb 23 13:32:41 crc kubenswrapper[4851]: I0223 13:32:41.862259 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-drxbv"
Feb 23 13:32:41 crc kubenswrapper[4851]: I0223 13:32:41.905560 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-drxbv"
Feb 23 13:32:41 crc kubenswrapper[4851]: I0223 13:32:41.925578 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 13:32:41 crc kubenswrapper[4851]: I0223 13:32:41.925681 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 13:32:42 crc kubenswrapper[4851]: I0223 13:32:42.476671 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-drxbv"
Feb 23 13:32:44 crc kubenswrapper[4851]: I0223 13:32:44.525018 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drxbv"]
Feb 23 13:32:44 crc kubenswrapper[4851]: I0223 13:32:44.525250 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-drxbv" podUID="25107006-626d-4d88-bba0-92dc5444d247" containerName="registry-server" containerID="cri-o://3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a" gracePeriod=2
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.010626 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drxbv"
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.049431 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwpmz\" (UniqueName: \"kubernetes.io/projected/25107006-626d-4d88-bba0-92dc5444d247-kube-api-access-dwpmz\") pod \"25107006-626d-4d88-bba0-92dc5444d247\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") "
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.049585 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-utilities\") pod \"25107006-626d-4d88-bba0-92dc5444d247\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") "
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.049607 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-catalog-content\") pod \"25107006-626d-4d88-bba0-92dc5444d247\" (UID: \"25107006-626d-4d88-bba0-92dc5444d247\") "
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.080576 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25107006-626d-4d88-bba0-92dc5444d247-kube-api-access-dwpmz" (OuterVolumeSpecName: "kube-api-access-dwpmz") pod "25107006-626d-4d88-bba0-92dc5444d247" (UID: "25107006-626d-4d88-bba0-92dc5444d247"). InnerVolumeSpecName "kube-api-access-dwpmz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.081792 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-utilities" (OuterVolumeSpecName: "utilities") pod "25107006-626d-4d88-bba0-92dc5444d247" (UID: "25107006-626d-4d88-bba0-92dc5444d247"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.129204 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25107006-626d-4d88-bba0-92dc5444d247" (UID: "25107006-626d-4d88-bba0-92dc5444d247"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.151319 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.151601 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25107006-626d-4d88-bba0-92dc5444d247-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.151614 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwpmz\" (UniqueName: \"kubernetes.io/projected/25107006-626d-4d88-bba0-92dc5444d247-kube-api-access-dwpmz\") on node \"crc\" DevicePath \"\""
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.459392 4851 generic.go:334] "Generic (PLEG): container finished" podID="25107006-626d-4d88-bba0-92dc5444d247" containerID="3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a" exitCode=0
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.459724 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drxbv" event={"ID":"25107006-626d-4d88-bba0-92dc5444d247","Type":"ContainerDied","Data":"3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a"}
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.459813 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drxbv" event={"ID":"25107006-626d-4d88-bba0-92dc5444d247","Type":"ContainerDied","Data":"291da8d4f31270920c75f7cb91a28b7170fd1768ce62ba719c17ec225dd3bfc1"}
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.459887 4851 scope.go:117] "RemoveContainer" containerID="3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a"
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.460051 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drxbv"
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.484989 4851 scope.go:117] "RemoveContainer" containerID="15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf"
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.494928 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drxbv"]
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.505073 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-drxbv"]
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.520343 4851 scope.go:117] "RemoveContainer" containerID="9f2ad7e709e1aee8ec5b9d5de9a344d0e39a85bcbb3b2fbf5324ab002a6769f1"
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.549515 4851 scope.go:117] "RemoveContainer" containerID="3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a"
Feb 23 13:32:45 crc kubenswrapper[4851]: E0223 13:32:45.550094 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a\": container with ID starting with 3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a not found: ID does not exist" containerID="3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a"
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.550132 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a"} err="failed to get container status \"3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a\": rpc error: code = NotFound desc = could not find container \"3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a\": container with ID starting with 3091d165fecfec853c52c87e82a77e1e1a2c6cc7ced411e8a2b4fc27edd88d7a not found: ID does not exist"
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.550159 4851 scope.go:117] "RemoveContainer" containerID="15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf"
Feb 23 13:32:45 crc kubenswrapper[4851]: E0223 13:32:45.550550 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf\": container with ID starting with 15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf not found: ID does not exist" containerID="15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf"
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.550584 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf"} err="failed to get container status \"15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf\": rpc error: code = NotFound desc = could not find container \"15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf\": container with ID starting with 15d1325f51965309f0406fffedc417791745c397150e66fbc42c2397cf12cabf not found: ID does not exist"
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.550606 4851 scope.go:117] "RemoveContainer" containerID="9f2ad7e709e1aee8ec5b9d5de9a344d0e39a85bcbb3b2fbf5324ab002a6769f1"
Feb 23 13:32:45 crc kubenswrapper[4851]: E0223 13:32:45.550873 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f2ad7e709e1aee8ec5b9d5de9a344d0e39a85bcbb3b2fbf5324ab002a6769f1\": container with ID starting with 9f2ad7e709e1aee8ec5b9d5de9a344d0e39a85bcbb3b2fbf5324ab002a6769f1 not found: ID does not exist" containerID="9f2ad7e709e1aee8ec5b9d5de9a344d0e39a85bcbb3b2fbf5324ab002a6769f1"
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.550895 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2ad7e709e1aee8ec5b9d5de9a344d0e39a85bcbb3b2fbf5324ab002a6769f1"} err="failed to get container status \"9f2ad7e709e1aee8ec5b9d5de9a344d0e39a85bcbb3b2fbf5324ab002a6769f1\": rpc error: code = NotFound desc = could not find container \"9f2ad7e709e1aee8ec5b9d5de9a344d0e39a85bcbb3b2fbf5324ab002a6769f1\": container with ID starting with 9f2ad7e709e1aee8ec5b9d5de9a344d0e39a85bcbb3b2fbf5324ab002a6769f1 not found: ID does not exist"
Feb 23 13:32:45 crc kubenswrapper[4851]: I0223 13:32:45.978941 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25107006-626d-4d88-bba0-92dc5444d247" path="/var/lib/kubelet/pods/25107006-626d-4d88-bba0-92dc5444d247/volumes"
Feb 23 13:33:11 crc kubenswrapper[4851]: I0223 13:33:11.924773 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 13:33:11 crc kubenswrapper[4851]: I0223 13:33:11.925309 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 13:33:36 crc kubenswrapper[4851]: I0223 13:33:36.739068 4851 scope.go:117] "RemoveContainer" containerID="f07d880cf8324b74c2c29a25e4f5847b72eaa950dac42f93412f72b9ff8aaf47"
Feb 23 13:33:36 crc kubenswrapper[4851]: I0223 13:33:36.779925 4851 scope.go:117] "RemoveContainer" containerID="2dd1343c1ba57065aa742f3d9e8b7ef9d19ed64ecbcf7380aec50b0007b0ec73"
Feb 23 13:33:36 crc kubenswrapper[4851]: I0223 13:33:36.806800 4851 scope.go:117] "RemoveContainer" containerID="71705e6de5619295269d61be1381c7af0f139a8bd398b834405b849c007f3961"
Feb 23 13:33:36 crc kubenswrapper[4851]: I0223 13:33:36.826584 4851 scope.go:117] "RemoveContainer" containerID="1785af1a62c4b7ac03828a6889361660d23027e9ddd0a9cb48f611a2396a75fa"
Feb 23 13:33:36 crc kubenswrapper[4851]: I0223 13:33:36.979021 4851 scope.go:117] "RemoveContainer" containerID="e632646867e97abfdbabbccc2035a2cbda5f741d333fae182aa2d58448ee3dd4"
Feb 23 13:33:41 crc kubenswrapper[4851]: I0223 13:33:41.925298 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 13:33:41 crc kubenswrapper[4851]: I0223 13:33:41.925702 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 13:33:41 crc kubenswrapper[4851]: I0223 13:33:41.925740 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg"
Feb 23 13:33:41 crc kubenswrapper[4851]: I0223 13:33:41.926484 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e17ebc61652294833ea0e89a5a1e9e10432ee4605526cd8e9e75484945df4bec"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 13:33:41 crc kubenswrapper[4851]: I0223 13:33:41.926551 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://e17ebc61652294833ea0e89a5a1e9e10432ee4605526cd8e9e75484945df4bec" gracePeriod=600
Feb 23 13:33:42 crc kubenswrapper[4851]: I0223 13:33:42.976866 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="e17ebc61652294833ea0e89a5a1e9e10432ee4605526cd8e9e75484945df4bec" exitCode=0
Feb 23 13:33:42 crc kubenswrapper[4851]: I0223 13:33:42.976955 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"e17ebc61652294833ea0e89a5a1e9e10432ee4605526cd8e9e75484945df4bec"}
Feb 23 13:33:42 crc kubenswrapper[4851]: I0223 13:33:42.977218 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2"}
Feb 23 13:33:42 crc kubenswrapper[4851]: I0223 13:33:42.977241 4851 scope.go:117] "RemoveContainer" containerID="58ac070e07fd5f5e92265b5996711448defe16f94a724c465cc2214cdff34234"
Feb 23 13:34:23 crc kubenswrapper[4851]: I0223 13:34:23.672948 4851 generic.go:334] "Generic (PLEG): container finished" podID="a83f6021-68fd-4a69-8d49-534de4546eee" containerID="310b4c84c5df38ac4e312ad015cd92f6eda32986ab4a2669ef039fabc93c5071" exitCode=0
Feb 23 13:34:23 crc kubenswrapper[4851]: I0223 13:34:23.673058 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" event={"ID":"a83f6021-68fd-4a69-8d49-534de4546eee","Type":"ContainerDied","Data":"310b4c84c5df38ac4e312ad015cd92f6eda32986ab4a2669ef039fabc93c5071"}
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.090350 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.206637 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-ssh-key-openstack-edpm-ipam\") pod \"a83f6021-68fd-4a69-8d49-534de4546eee\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") "
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.206752 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5467\" (UniqueName: \"kubernetes.io/projected/a83f6021-68fd-4a69-8d49-534de4546eee-kube-api-access-z5467\") pod \"a83f6021-68fd-4a69-8d49-534de4546eee\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") "
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.206786 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-bootstrap-combined-ca-bundle\") pod \"a83f6021-68fd-4a69-8d49-534de4546eee\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") "
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.206863 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-inventory\") pod \"a83f6021-68fd-4a69-8d49-534de4546eee\" (UID: \"a83f6021-68fd-4a69-8d49-534de4546eee\") "
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.214459 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a83f6021-68fd-4a69-8d49-534de4546eee" (UID: "a83f6021-68fd-4a69-8d49-534de4546eee"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.215590 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83f6021-68fd-4a69-8d49-534de4546eee-kube-api-access-z5467" (OuterVolumeSpecName: "kube-api-access-z5467") pod "a83f6021-68fd-4a69-8d49-534de4546eee" (UID: "a83f6021-68fd-4a69-8d49-534de4546eee"). InnerVolumeSpecName "kube-api-access-z5467". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.232876 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-inventory" (OuterVolumeSpecName: "inventory") pod "a83f6021-68fd-4a69-8d49-534de4546eee" (UID: "a83f6021-68fd-4a69-8d49-534de4546eee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.235104 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a83f6021-68fd-4a69-8d49-534de4546eee" (UID: "a83f6021-68fd-4a69-8d49-534de4546eee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.309202 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.309244 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5467\" (UniqueName: \"kubernetes.io/projected/a83f6021-68fd-4a69-8d49-534de4546eee-kube-api-access-z5467\") on node \"crc\" DevicePath \"\""
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.309253 4851 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.309265 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a83f6021-68fd-4a69-8d49-534de4546eee-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.696665 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9" event={"ID":"a83f6021-68fd-4a69-8d49-534de4546eee","Type":"ContainerDied","Data":"f92bfffc67bdb3f6618efb75ce35505c354908f5badf5c83b89363967b59bab0"}
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.696712 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f92bfffc67bdb3f6618efb75ce35505c354908f5badf5c83b89363967b59bab0"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.696779 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.776021 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"]
Feb 23 13:34:25 crc kubenswrapper[4851]: E0223 13:34:25.776730 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25107006-626d-4d88-bba0-92dc5444d247" containerName="extract-content"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.776760 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="25107006-626d-4d88-bba0-92dc5444d247" containerName="extract-content"
Feb 23 13:34:25 crc kubenswrapper[4851]: E0223 13:34:25.776786 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25107006-626d-4d88-bba0-92dc5444d247" containerName="registry-server"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.776795 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="25107006-626d-4d88-bba0-92dc5444d247" containerName="registry-server"
Feb 23 13:34:25 crc kubenswrapper[4851]: E0223 13:34:25.776813 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerName="extract-utilities"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.776822 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerName="extract-utilities"
Feb 23 13:34:25 crc kubenswrapper[4851]: E0223 13:34:25.776834 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerName="registry-server"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.776842 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerName="registry-server"
Feb 23 13:34:25 crc kubenswrapper[4851]: E0223 13:34:25.776857 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25107006-626d-4d88-bba0-92dc5444d247" containerName="extract-utilities"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.776866 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="25107006-626d-4d88-bba0-92dc5444d247" containerName="extract-utilities"
Feb 23 13:34:25 crc kubenswrapper[4851]: E0223 13:34:25.776887 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83f6021-68fd-4a69-8d49-534de4546eee" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.776897 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83f6021-68fd-4a69-8d49-534de4546eee" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 23 13:34:25 crc kubenswrapper[4851]: E0223 13:34:25.776922 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerName="extract-content"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.776930 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerName="extract-content"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.777170 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="25107006-626d-4d88-bba0-92dc5444d247" containerName="registry-server"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.777191 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83f6021-68fd-4a69-8d49-534de4546eee" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.777214 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="8729ca56-5cdd-4fc0-abcd-f274b5f30348" containerName="registry-server"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.778083 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.782001 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.782151 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.782175 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.782202 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.783916 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"]
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.816837 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tskm7\" (UniqueName: \"kubernetes.io/projected/1a88cbca-158c-4879-a5ef-48b9714a4043-kube-api-access-tskm7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.817218 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.817314 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.919653 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.919772 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tskm7\" (UniqueName: \"kubernetes.io/projected/1a88cbca-158c-4879-a5ef-48b9714a4043-kube-api-access-tskm7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.919854 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.921764 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.922543 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.935533 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.935951 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tskm7\" (UniqueName: \"kubernetes.io/projected/1a88cbca-158c-4879-a5ef-48b9714a4043-kube-api-access-tskm7\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:34:25 crc kubenswrapper[4851]: I0223 13:34:25.936971 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:34:26 crc kubenswrapper[4851]: I0223 13:34:26.102572 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb"
Feb 23 13:34:26 crc kubenswrapper[4851]: I0223 13:34:26.111963 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:34:26 crc kubenswrapper[4851]: I0223 13:34:26.656610 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"]
Feb 23 13:34:26 crc kubenswrapper[4851]: I0223 13:34:26.706479 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx" event={"ID":"1a88cbca-158c-4879-a5ef-48b9714a4043","Type":"ContainerStarted","Data":"9a95a0c0a2f84db357be8953c0c5b479114f8c03c01893d408a4bbecf2684192"}
Feb 23 13:34:27 crc kubenswrapper[4851]: I0223 13:34:27.068756 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 13:34:27 crc kubenswrapper[4851]: I0223 13:34:27.715613 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx" event={"ID":"1a88cbca-158c-4879-a5ef-48b9714a4043","Type":"ContainerStarted","Data":"c9d95e55ae805323c8805bd7fa8d8d1db919aba8a4394d66ad365e7b76ec3afe"}
Feb 23 13:34:27 crc kubenswrapper[4851]: I0223 13:34:27.739500 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx" podStartSLOduration=2.344464116 podStartE2EDuration="2.739483438s" podCreationTimestamp="2026-02-23 13:34:25 +0000 UTC" firstStartedPulling="2026-02-23 13:34:26.671631074 +0000 UTC m=+1621.353334752" lastFinishedPulling="2026-02-23 13:34:27.066650396 +0000 UTC m=+1621.748354074" observedRunningTime="2026-02-23 13:34:27.735120275 +0000 UTC m=+1622.416823973" watchObservedRunningTime="2026-02-23 13:34:27.739483438 +0000 UTC m=+1622.421187106"
Feb 23 13:35:37 crc kubenswrapper[4851]: I0223 13:35:37.075967 4851 scope.go:117] "RemoveContainer" containerID="a7e8111d37499d62c119f7636ae287b26f914e06057dc63662efc4f2d85beccc"
Feb 23 13:35:37 crc kubenswrapper[4851]: I0223 13:35:37.113427 4851 scope.go:117] "RemoveContainer" containerID="9f8e92cf331d342af949ca1b98b5942dd0654e6b237998d3ac175714b68ba349"
Feb 23 13:35:37 crc kubenswrapper[4851]: I0223 13:35:37.135339 4851 scope.go:117] "RemoveContainer" containerID="905791ceb681eb949bb445c2d5c46d8af82e92f7af5f72d1e97eff78d705fa75"
Feb 23 13:35:46 crc kubenswrapper[4851]: I0223 13:35:46.554869 4851 generic.go:334] "Generic (PLEG): container finished" podID="1a88cbca-158c-4879-a5ef-48b9714a4043" containerID="c9d95e55ae805323c8805bd7fa8d8d1db919aba8a4394d66ad365e7b76ec3afe" exitCode=0
Feb 23 13:35:46 crc kubenswrapper[4851]: I0223 13:35:46.556987 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx" event={"ID":"1a88cbca-158c-4879-a5ef-48b9714a4043","Type":"ContainerDied","Data":"c9d95e55ae805323c8805bd7fa8d8d1db919aba8a4394d66ad365e7b76ec3afe"}
Feb 23 13:35:47 crc kubenswrapper[4851]: I0223 13:35:47.960010 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx"
Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.092289 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-ssh-key-openstack-edpm-ipam\") pod \"1a88cbca-158c-4879-a5ef-48b9714a4043\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") "
Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.092572 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-inventory\") pod \"1a88cbca-158c-4879-a5ef-48b9714a4043\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") "
Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.092604 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tskm7\" (UniqueName: \"kubernetes.io/projected/1a88cbca-158c-4879-a5ef-48b9714a4043-kube-api-access-tskm7\") pod \"1a88cbca-158c-4879-a5ef-48b9714a4043\" (UID: \"1a88cbca-158c-4879-a5ef-48b9714a4043\") "
Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.101915 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a88cbca-158c-4879-a5ef-48b9714a4043-kube-api-access-tskm7" (OuterVolumeSpecName: "kube-api-access-tskm7") pod "1a88cbca-158c-4879-a5ef-48b9714a4043" (UID: "1a88cbca-158c-4879-a5ef-48b9714a4043"). InnerVolumeSpecName "kube-api-access-tskm7".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.118652 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1a88cbca-158c-4879-a5ef-48b9714a4043" (UID: "1a88cbca-158c-4879-a5ef-48b9714a4043"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.119318 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-inventory" (OuterVolumeSpecName: "inventory") pod "1a88cbca-158c-4879-a5ef-48b9714a4043" (UID: "1a88cbca-158c-4879-a5ef-48b9714a4043"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.194209 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.194241 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tskm7\" (UniqueName: \"kubernetes.io/projected/1a88cbca-158c-4879-a5ef-48b9714a4043-kube-api-access-tskm7\") on node \"crc\" DevicePath \"\"" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.194252 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a88cbca-158c-4879-a5ef-48b9714a4043-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.575304 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx" 
event={"ID":"1a88cbca-158c-4879-a5ef-48b9714a4043","Type":"ContainerDied","Data":"9a95a0c0a2f84db357be8953c0c5b479114f8c03c01893d408a4bbecf2684192"} Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.575368 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a95a0c0a2f84db357be8953c0c5b479114f8c03c01893d408a4bbecf2684192" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.575473 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.655642 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz"] Feb 23 13:35:48 crc kubenswrapper[4851]: E0223 13:35:48.656100 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a88cbca-158c-4879-a5ef-48b9714a4043" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.656118 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a88cbca-158c-4879-a5ef-48b9714a4043" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.656296 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a88cbca-158c-4879-a5ef-48b9714a4043" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.656956 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.660351 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.660706 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.660821 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.660898 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.672986 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz"] Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.701463 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.701540 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:35:48 crc 
kubenswrapper[4851]: I0223 13:35:48.701565 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rps25\" (UniqueName: \"kubernetes.io/projected/1a7542f5-0c08-40fc-a218-f196e7769853-kube-api-access-rps25\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.803597 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.803639 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rps25\" (UniqueName: \"kubernetes.io/projected/1a7542f5-0c08-40fc-a218-f196e7769853-kube-api-access-rps25\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.803790 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.812740 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.812740 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.819070 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rps25\" (UniqueName: \"kubernetes.io/projected/1a7542f5-0c08-40fc-a218-f196e7769853-kube-api-access-rps25\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:35:48 crc kubenswrapper[4851]: I0223 13:35:48.980649 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:35:49 crc kubenswrapper[4851]: I0223 13:35:49.498937 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz"] Feb 23 13:35:49 crc kubenswrapper[4851]: I0223 13:35:49.583803 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" event={"ID":"1a7542f5-0c08-40fc-a218-f196e7769853","Type":"ContainerStarted","Data":"cd288f070a76d1633f1ec5e38e7a3ad11b08074dd84b2b71f74ecb82002c671e"} Feb 23 13:35:50 crc kubenswrapper[4851]: I0223 13:35:50.597392 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" event={"ID":"1a7542f5-0c08-40fc-a218-f196e7769853","Type":"ContainerStarted","Data":"29cc4e11ef3789ea74a4a556509fe7f3676f2fee5c8a00a47367e813d7467631"} Feb 23 13:35:50 crc kubenswrapper[4851]: I0223 13:35:50.614605 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" podStartSLOduration=2.183053219 podStartE2EDuration="2.614585565s" podCreationTimestamp="2026-02-23 13:35:48 +0000 UTC" firstStartedPulling="2026-02-23 13:35:49.506964262 +0000 UTC m=+1704.188667940" lastFinishedPulling="2026-02-23 13:35:49.938496608 +0000 UTC m=+1704.620200286" observedRunningTime="2026-02-23 13:35:50.614490152 +0000 UTC m=+1705.296193860" watchObservedRunningTime="2026-02-23 13:35:50.614585565 +0000 UTC m=+1705.296289243" Feb 23 13:35:56 crc kubenswrapper[4851]: I0223 13:35:56.048457 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6ft7k"] Feb 23 13:35:56 crc kubenswrapper[4851]: I0223 13:35:56.078628 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7ssg8"] Feb 23 13:35:56 crc 
kubenswrapper[4851]: I0223 13:35:56.093441 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b5c5-account-create-update-97hg2"] Feb 23 13:35:56 crc kubenswrapper[4851]: I0223 13:35:56.104860 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d081-account-create-update-xsbtx"] Feb 23 13:35:56 crc kubenswrapper[4851]: I0223 13:35:56.115503 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-wwt5b"] Feb 23 13:35:56 crc kubenswrapper[4851]: I0223 13:35:56.126350 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0af4-account-create-update-tklbk"] Feb 23 13:35:56 crc kubenswrapper[4851]: I0223 13:35:56.136514 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6ft7k"] Feb 23 13:35:56 crc kubenswrapper[4851]: I0223 13:35:56.144309 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7ssg8"] Feb 23 13:35:56 crc kubenswrapper[4851]: I0223 13:35:56.154526 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-wwt5b"] Feb 23 13:35:56 crc kubenswrapper[4851]: I0223 13:35:56.165062 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b5c5-account-create-update-97hg2"] Feb 23 13:35:56 crc kubenswrapper[4851]: I0223 13:35:56.174602 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d081-account-create-update-xsbtx"] Feb 23 13:35:56 crc kubenswrapper[4851]: I0223 13:35:56.185434 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-0af4-account-create-update-tklbk"] Feb 23 13:35:57 crc kubenswrapper[4851]: I0223 13:35:57.978502 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="152d8a17-4503-471d-adf6-8dcbd8d337db" path="/var/lib/kubelet/pods/152d8a17-4503-471d-adf6-8dcbd8d337db/volumes" Feb 23 13:35:57 crc kubenswrapper[4851]: I0223 13:35:57.979043 4851 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c7a1472-be44-4ead-a548-7a377e357ea0" path="/var/lib/kubelet/pods/2c7a1472-be44-4ead-a548-7a377e357ea0/volumes" Feb 23 13:35:57 crc kubenswrapper[4851]: I0223 13:35:57.979557 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ceb3b82-a91b-49e4-8cb3-437e775e1fbc" path="/var/lib/kubelet/pods/6ceb3b82-a91b-49e4-8cb3-437e775e1fbc/volumes" Feb 23 13:35:57 crc kubenswrapper[4851]: I0223 13:35:57.980103 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a270ad09-d30d-4100-be0f-cc026ba47238" path="/var/lib/kubelet/pods/a270ad09-d30d-4100-be0f-cc026ba47238/volumes" Feb 23 13:35:57 crc kubenswrapper[4851]: I0223 13:35:57.981113 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb1aeb9c-194d-4b98-9109-c2474a0a8767" path="/var/lib/kubelet/pods/cb1aeb9c-194d-4b98-9109-c2474a0a8767/volumes" Feb 23 13:35:57 crc kubenswrapper[4851]: I0223 13:35:57.981659 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49c3d1e-e4c6-42c7-8132-30aad920eade" path="/var/lib/kubelet/pods/d49c3d1e-e4c6-42c7-8132-30aad920eade/volumes" Feb 23 13:36:11 crc kubenswrapper[4851]: I0223 13:36:11.925358 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:36:11 crc kubenswrapper[4851]: I0223 13:36:11.925943 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:36:16 crc kubenswrapper[4851]: I0223 13:36:16.028478 4851 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tpxbb"] Feb 23 13:36:16 crc kubenswrapper[4851]: I0223 13:36:16.040200 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tpxbb"] Feb 23 13:36:17 crc kubenswrapper[4851]: I0223 13:36:17.979999 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb17c35-0ab0-4089-9847-d2acfdb17332" path="/var/lib/kubelet/pods/dbb17c35-0ab0-4089-9847-d2acfdb17332/volumes" Feb 23 13:36:22 crc kubenswrapper[4851]: I0223 13:36:22.035166 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6sw4p"] Feb 23 13:36:22 crc kubenswrapper[4851]: I0223 13:36:22.042550 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6sw4p"] Feb 23 13:36:23 crc kubenswrapper[4851]: I0223 13:36:23.983128 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07f3810-fe79-4343-a6c7-0fa6e0281b2d" path="/var/lib/kubelet/pods/b07f3810-fe79-4343-a6c7-0fa6e0281b2d/volumes" Feb 23 13:36:37 crc kubenswrapper[4851]: I0223 13:36:37.201993 4851 scope.go:117] "RemoveContainer" containerID="b09c6d6d23b3aeee601b10d7ac6e559e18764bf963f4ebb98f177785b5b08827" Feb 23 13:36:37 crc kubenswrapper[4851]: I0223 13:36:37.230707 4851 scope.go:117] "RemoveContainer" containerID="f6669806b9cb9484ea7726ba18d82e21bce154589e2b0ca3f749ac37d17d52eb" Feb 23 13:36:37 crc kubenswrapper[4851]: I0223 13:36:37.284801 4851 scope.go:117] "RemoveContainer" containerID="0de3bcafb4fec9d09db18f7c343a8312d2cb3c9c7acd67a4086a3f4b6203dc93" Feb 23 13:36:37 crc kubenswrapper[4851]: I0223 13:36:37.328872 4851 scope.go:117] "RemoveContainer" containerID="709a3599d3a247d11b2017c42124e403ad4a01c01ca959da61eb61a4718eb3dc" Feb 23 13:36:37 crc kubenswrapper[4851]: I0223 13:36:37.371433 4851 scope.go:117] "RemoveContainer" containerID="2b04fee2c209efb5ee8f0342572ed47679ea8236d73e963cfff81f56bd730391" Feb 23 13:36:37 crc 
kubenswrapper[4851]: I0223 13:36:37.411105 4851 scope.go:117] "RemoveContainer" containerID="a4a565f024bccf2aa765facf981ee999208b9f4cf418c2b0c54ff80aed0467ab" Feb 23 13:36:37 crc kubenswrapper[4851]: I0223 13:36:37.457787 4851 scope.go:117] "RemoveContainer" containerID="2a1bc1ca0bfea6d912442719076208c8c61f56dc08b5936b91919337e368c305" Feb 23 13:36:37 crc kubenswrapper[4851]: I0223 13:36:37.484531 4851 scope.go:117] "RemoveContainer" containerID="54ace890021796066e431874b7676a56927731a7847fe88750045628c2f20947" Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.061888 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-pqvpq"] Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.072409 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0f44-account-create-update-ptrt5"] Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.080595 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-044e-account-create-update-hq2hn"] Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.089295 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7231-account-create-update-whd9p"] Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.097208 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-pqvpq"] Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.105470 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-7dwr5"] Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.113396 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-044e-account-create-update-hq2hn"] Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.121642 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-7dwr5"] Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.130197 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-7231-account-create-update-whd9p"] Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.137518 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0f44-account-create-update-ptrt5"] Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.147002 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-x9z4v"] Feb 23 13:36:38 crc kubenswrapper[4851]: I0223 13:36:38.154735 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-x9z4v"] Feb 23 13:36:39 crc kubenswrapper[4851]: I0223 13:36:39.986412 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae4a32c-745e-4a9e-a3ca-226b8890d6ad" path="/var/lib/kubelet/pods/0ae4a32c-745e-4a9e-a3ca-226b8890d6ad/volumes" Feb 23 13:36:39 crc kubenswrapper[4851]: I0223 13:36:39.988108 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39ef6fd7-8171-4e6b-9cda-5b8610248ca2" path="/var/lib/kubelet/pods/39ef6fd7-8171-4e6b-9cda-5b8610248ca2/volumes" Feb 23 13:36:39 crc kubenswrapper[4851]: I0223 13:36:39.989687 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec82359-d307-4ee4-8dc7-a9db6d393244" path="/var/lib/kubelet/pods/3ec82359-d307-4ee4-8dc7-a9db6d393244/volumes" Feb 23 13:36:39 crc kubenswrapper[4851]: I0223 13:36:39.990877 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4" path="/var/lib/kubelet/pods/86b5ffac-c8d0-4b5c-9f28-daba9dfb6be4/volumes" Feb 23 13:36:39 crc kubenswrapper[4851]: I0223 13:36:39.992681 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88533393-d631-48b0-b09f-883391965b09" path="/var/lib/kubelet/pods/88533393-d631-48b0-b09f-883391965b09/volumes" Feb 23 13:36:39 crc kubenswrapper[4851]: I0223 13:36:39.993750 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e353fec8-4196-4baa-8f02-878651e9bcc5" 
path="/var/lib/kubelet/pods/e353fec8-4196-4baa-8f02-878651e9bcc5/volumes" Feb 23 13:36:41 crc kubenswrapper[4851]: I0223 13:36:41.925320 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:36:41 crc kubenswrapper[4851]: I0223 13:36:41.925942 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:36:42 crc kubenswrapper[4851]: I0223 13:36:42.027243 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ht4d4"] Feb 23 13:36:42 crc kubenswrapper[4851]: I0223 13:36:42.038479 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ht4d4"] Feb 23 13:36:43 crc kubenswrapper[4851]: I0223 13:36:43.979806 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8776d848-0f18-44d4-9eaf-3108ca8a79bd" path="/var/lib/kubelet/pods/8776d848-0f18-44d4-9eaf-3108ca8a79bd/volumes" Feb 23 13:36:50 crc kubenswrapper[4851]: I0223 13:36:50.132721 4851 generic.go:334] "Generic (PLEG): container finished" podID="1a7542f5-0c08-40fc-a218-f196e7769853" containerID="29cc4e11ef3789ea74a4a556509fe7f3676f2fee5c8a00a47367e813d7467631" exitCode=0 Feb 23 13:36:50 crc kubenswrapper[4851]: I0223 13:36:50.132806 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" event={"ID":"1a7542f5-0c08-40fc-a218-f196e7769853","Type":"ContainerDied","Data":"29cc4e11ef3789ea74a4a556509fe7f3676f2fee5c8a00a47367e813d7467631"} Feb 23 
13:36:51 crc kubenswrapper[4851]: I0223 13:36:51.507935 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:36:51 crc kubenswrapper[4851]: I0223 13:36:51.708369 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rps25\" (UniqueName: \"kubernetes.io/projected/1a7542f5-0c08-40fc-a218-f196e7769853-kube-api-access-rps25\") pod \"1a7542f5-0c08-40fc-a218-f196e7769853\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " Feb 23 13:36:51 crc kubenswrapper[4851]: I0223 13:36:51.708445 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-inventory\") pod \"1a7542f5-0c08-40fc-a218-f196e7769853\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " Feb 23 13:36:51 crc kubenswrapper[4851]: I0223 13:36:51.708607 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-ssh-key-openstack-edpm-ipam\") pod \"1a7542f5-0c08-40fc-a218-f196e7769853\" (UID: \"1a7542f5-0c08-40fc-a218-f196e7769853\") " Feb 23 13:36:51 crc kubenswrapper[4851]: I0223 13:36:51.714131 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7542f5-0c08-40fc-a218-f196e7769853-kube-api-access-rps25" (OuterVolumeSpecName: "kube-api-access-rps25") pod "1a7542f5-0c08-40fc-a218-f196e7769853" (UID: "1a7542f5-0c08-40fc-a218-f196e7769853"). InnerVolumeSpecName "kube-api-access-rps25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:36:51 crc kubenswrapper[4851]: I0223 13:36:51.736007 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1a7542f5-0c08-40fc-a218-f196e7769853" (UID: "1a7542f5-0c08-40fc-a218-f196e7769853"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:36:51 crc kubenswrapper[4851]: I0223 13:36:51.741781 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-inventory" (OuterVolumeSpecName: "inventory") pod "1a7542f5-0c08-40fc-a218-f196e7769853" (UID: "1a7542f5-0c08-40fc-a218-f196e7769853"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:36:51 crc kubenswrapper[4851]: I0223 13:36:51.811374 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rps25\" (UniqueName: \"kubernetes.io/projected/1a7542f5-0c08-40fc-a218-f196e7769853-kube-api-access-rps25\") on node \"crc\" DevicePath \"\"" Feb 23 13:36:51 crc kubenswrapper[4851]: I0223 13:36:51.811408 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:36:51 crc kubenswrapper[4851]: I0223 13:36:51.811418 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1a7542f5-0c08-40fc-a218-f196e7769853-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.149221 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" 
event={"ID":"1a7542f5-0c08-40fc-a218-f196e7769853","Type":"ContainerDied","Data":"cd288f070a76d1633f1ec5e38e7a3ad11b08074dd84b2b71f74ecb82002c671e"} Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.149497 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd288f070a76d1633f1ec5e38e7a3ad11b08074dd84b2b71f74ecb82002c671e" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.149495 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.223016 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr"] Feb 23 13:36:52 crc kubenswrapper[4851]: E0223 13:36:52.224922 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7542f5-0c08-40fc-a218-f196e7769853" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.225106 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7542f5-0c08-40fc-a218-f196e7769853" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.225519 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7542f5-0c08-40fc-a218-f196e7769853" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.227164 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.229241 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.230110 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.230375 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.230638 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.233112 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr"] Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.421812 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7phqr\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.421884 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7phqr\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:36:52 crc kubenswrapper[4851]: 
I0223 13:36:52.421992 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j67bc\" (UniqueName: \"kubernetes.io/projected/33d023b8-6967-4bc9-813e-08892dfa7107-kube-api-access-j67bc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7phqr\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.523534 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j67bc\" (UniqueName: \"kubernetes.io/projected/33d023b8-6967-4bc9-813e-08892dfa7107-kube-api-access-j67bc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7phqr\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.523606 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7phqr\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.523662 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7phqr\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.528507 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7phqr\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.528992 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7phqr\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.540989 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j67bc\" (UniqueName: \"kubernetes.io/projected/33d023b8-6967-4bc9-813e-08892dfa7107-kube-api-access-j67bc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-7phqr\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:36:52 crc kubenswrapper[4851]: I0223 13:36:52.615614 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:36:53 crc kubenswrapper[4851]: I0223 13:36:53.112038 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr"] Feb 23 13:36:53 crc kubenswrapper[4851]: I0223 13:36:53.114791 4851 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:36:53 crc kubenswrapper[4851]: I0223 13:36:53.158639 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" event={"ID":"33d023b8-6967-4bc9-813e-08892dfa7107","Type":"ContainerStarted","Data":"23db594beb0f4201a89212a9644ba82775fcca702b2a9e64b827ffee7e5f48a1"} Feb 23 13:36:54 crc kubenswrapper[4851]: I0223 13:36:54.171461 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" event={"ID":"33d023b8-6967-4bc9-813e-08892dfa7107","Type":"ContainerStarted","Data":"f0b66125fdd46b76149f182a9e5a8d56b375f2d3d6d80292437459106f31a24c"} Feb 23 13:36:54 crc kubenswrapper[4851]: I0223 13:36:54.189973 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" podStartSLOduration=1.632094499 podStartE2EDuration="2.189953304s" podCreationTimestamp="2026-02-23 13:36:52 +0000 UTC" firstStartedPulling="2026-02-23 13:36:53.114507392 +0000 UTC m=+1767.796211080" lastFinishedPulling="2026-02-23 13:36:53.672366207 +0000 UTC m=+1768.354069885" observedRunningTime="2026-02-23 13:36:54.182998217 +0000 UTC m=+1768.864701905" watchObservedRunningTime="2026-02-23 13:36:54.189953304 +0000 UTC m=+1768.871656992" Feb 23 13:36:59 crc kubenswrapper[4851]: I0223 13:36:59.211935 4851 generic.go:334] "Generic (PLEG): container finished" podID="33d023b8-6967-4bc9-813e-08892dfa7107" 
containerID="f0b66125fdd46b76149f182a9e5a8d56b375f2d3d6d80292437459106f31a24c" exitCode=0 Feb 23 13:36:59 crc kubenswrapper[4851]: I0223 13:36:59.212024 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" event={"ID":"33d023b8-6967-4bc9-813e-08892dfa7107","Type":"ContainerDied","Data":"f0b66125fdd46b76149f182a9e5a8d56b375f2d3d6d80292437459106f31a24c"} Feb 23 13:37:00 crc kubenswrapper[4851]: I0223 13:37:00.597622 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:37:00 crc kubenswrapper[4851]: I0223 13:37:00.799862 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j67bc\" (UniqueName: \"kubernetes.io/projected/33d023b8-6967-4bc9-813e-08892dfa7107-kube-api-access-j67bc\") pod \"33d023b8-6967-4bc9-813e-08892dfa7107\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " Feb 23 13:37:00 crc kubenswrapper[4851]: I0223 13:37:00.800475 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-inventory\") pod \"33d023b8-6967-4bc9-813e-08892dfa7107\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " Feb 23 13:37:00 crc kubenswrapper[4851]: I0223 13:37:00.800585 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-ssh-key-openstack-edpm-ipam\") pod \"33d023b8-6967-4bc9-813e-08892dfa7107\" (UID: \"33d023b8-6967-4bc9-813e-08892dfa7107\") " Feb 23 13:37:00 crc kubenswrapper[4851]: I0223 13:37:00.807129 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d023b8-6967-4bc9-813e-08892dfa7107-kube-api-access-j67bc" (OuterVolumeSpecName: 
"kube-api-access-j67bc") pod "33d023b8-6967-4bc9-813e-08892dfa7107" (UID: "33d023b8-6967-4bc9-813e-08892dfa7107"). InnerVolumeSpecName "kube-api-access-j67bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:37:00 crc kubenswrapper[4851]: I0223 13:37:00.828719 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "33d023b8-6967-4bc9-813e-08892dfa7107" (UID: "33d023b8-6967-4bc9-813e-08892dfa7107"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:37:00 crc kubenswrapper[4851]: I0223 13:37:00.831989 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-inventory" (OuterVolumeSpecName: "inventory") pod "33d023b8-6967-4bc9-813e-08892dfa7107" (UID: "33d023b8-6967-4bc9-813e-08892dfa7107"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:37:00 crc kubenswrapper[4851]: I0223 13:37:00.902498 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:37:00 crc kubenswrapper[4851]: I0223 13:37:00.902534 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j67bc\" (UniqueName: \"kubernetes.io/projected/33d023b8-6967-4bc9-813e-08892dfa7107-kube-api-access-j67bc\") on node \"crc\" DevicePath \"\"" Feb 23 13:37:00 crc kubenswrapper[4851]: I0223 13:37:00.902548 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/33d023b8-6967-4bc9-813e-08892dfa7107-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.265616 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" event={"ID":"33d023b8-6967-4bc9-813e-08892dfa7107","Type":"ContainerDied","Data":"23db594beb0f4201a89212a9644ba82775fcca702b2a9e64b827ffee7e5f48a1"} Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.265659 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23db594beb0f4201a89212a9644ba82775fcca702b2a9e64b827ffee7e5f48a1" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.265715 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-7phqr" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.321377 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh"] Feb 23 13:37:01 crc kubenswrapper[4851]: E0223 13:37:01.321924 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d023b8-6967-4bc9-813e-08892dfa7107" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.321948 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d023b8-6967-4bc9-813e-08892dfa7107" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.322286 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="33d023b8-6967-4bc9-813e-08892dfa7107" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.323088 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.325497 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.325487 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.325783 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.326161 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.330761 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh"] Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.513298 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rxvsh\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.513404 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27vdd\" (UniqueName: \"kubernetes.io/projected/bc81c277-18fa-44d5-8211-37e2b5ca5069-kube-api-access-27vdd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rxvsh\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.513465 4851 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rxvsh\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.614816 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rxvsh\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.614886 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27vdd\" (UniqueName: \"kubernetes.io/projected/bc81c277-18fa-44d5-8211-37e2b5ca5069-kube-api-access-27vdd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rxvsh\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.614954 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rxvsh\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.618598 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rxvsh\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.619593 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rxvsh\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.630754 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27vdd\" (UniqueName: \"kubernetes.io/projected/bc81c277-18fa-44d5-8211-37e2b5ca5069-kube-api-access-27vdd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-rxvsh\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:01 crc kubenswrapper[4851]: I0223 13:37:01.643701 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:02 crc kubenswrapper[4851]: I0223 13:37:02.765891 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh"] Feb 23 13:37:03 crc kubenswrapper[4851]: I0223 13:37:03.284102 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" event={"ID":"bc81c277-18fa-44d5-8211-37e2b5ca5069","Type":"ContainerStarted","Data":"1a7f935a823df059bb865a0369cd935afb2cf241e8b7d78544b22f41ec65af45"} Feb 23 13:37:04 crc kubenswrapper[4851]: I0223 13:37:04.292849 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" event={"ID":"bc81c277-18fa-44d5-8211-37e2b5ca5069","Type":"ContainerStarted","Data":"b891ef1b769826a32a6d6315eed39bd72cf8c444b19db6a7f6dd5fb579766ccb"} Feb 23 13:37:04 crc kubenswrapper[4851]: I0223 13:37:04.311571 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" podStartSLOduration=2.930422834 podStartE2EDuration="3.311547212s" podCreationTimestamp="2026-02-23 13:37:01 +0000 UTC" firstStartedPulling="2026-02-23 13:37:02.766896752 +0000 UTC m=+1777.448600440" lastFinishedPulling="2026-02-23 13:37:03.14802114 +0000 UTC m=+1777.829724818" observedRunningTime="2026-02-23 13:37:04.308964289 +0000 UTC m=+1778.990667987" watchObservedRunningTime="2026-02-23 13:37:04.311547212 +0000 UTC m=+1778.993250900" Feb 23 13:37:09 crc kubenswrapper[4851]: I0223 13:37:09.036900 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-24vww"] Feb 23 13:37:09 crc kubenswrapper[4851]: I0223 13:37:09.045266 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-24vww"] Feb 23 13:37:09 crc kubenswrapper[4851]: I0223 13:37:09.979406 4851 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="322bc2f6-b9c6-4769-bc8c-fa7974459069" path="/var/lib/kubelet/pods/322bc2f6-b9c6-4769-bc8c-fa7974459069/volumes" Feb 23 13:37:11 crc kubenswrapper[4851]: I0223 13:37:11.924911 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:37:11 crc kubenswrapper[4851]: I0223 13:37:11.925234 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:37:11 crc kubenswrapper[4851]: I0223 13:37:11.925277 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:37:11 crc kubenswrapper[4851]: I0223 13:37:11.925995 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 13:37:11 crc kubenswrapper[4851]: I0223 13:37:11.926046 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" gracePeriod=600 Feb 23 13:37:12 crc 
kubenswrapper[4851]: E0223 13:37:12.049552 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:37:12 crc kubenswrapper[4851]: I0223 13:37:12.370992 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" exitCode=0 Feb 23 13:37:12 crc kubenswrapper[4851]: I0223 13:37:12.371059 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2"} Feb 23 13:37:12 crc kubenswrapper[4851]: I0223 13:37:12.371116 4851 scope.go:117] "RemoveContainer" containerID="e17ebc61652294833ea0e89a5a1e9e10432ee4605526cd8e9e75484945df4bec" Feb 23 13:37:12 crc kubenswrapper[4851]: I0223 13:37:12.372155 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:37:12 crc kubenswrapper[4851]: E0223 13:37:12.372604 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:37:18 crc kubenswrapper[4851]: I0223 13:37:18.031582 4851 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mkz2c"] Feb 23 13:37:18 crc kubenswrapper[4851]: I0223 13:37:18.040519 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mkz2c"] Feb 23 13:37:19 crc kubenswrapper[4851]: I0223 13:37:19.979915 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601a9699-38b0-449c-9e0a-1705b5a174a4" path="/var/lib/kubelet/pods/601a9699-38b0-449c-9e0a-1705b5a174a4/volumes" Feb 23 13:37:25 crc kubenswrapper[4851]: I0223 13:37:25.036367 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8l7sd"] Feb 23 13:37:25 crc kubenswrapper[4851]: I0223 13:37:25.049460 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8l7sd"] Feb 23 13:37:25 crc kubenswrapper[4851]: I0223 13:37:25.982540 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20225786-c4f3-48e3-8719-d0710aeb3655" path="/var/lib/kubelet/pods/20225786-c4f3-48e3-8719-d0710aeb3655/volumes" Feb 23 13:37:26 crc kubenswrapper[4851]: I0223 13:37:26.968373 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:37:26 crc kubenswrapper[4851]: E0223 13:37:26.969071 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:37:30 crc kubenswrapper[4851]: I0223 13:37:30.024059 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-466sr"] Feb 23 13:37:30 crc kubenswrapper[4851]: I0223 13:37:30.033209 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-db-sync-466sr"] Feb 23 13:37:31 crc kubenswrapper[4851]: I0223 13:37:31.980664 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23eaed53-2c8f-46ae-bc53-87ab7855282a" path="/var/lib/kubelet/pods/23eaed53-2c8f-46ae-bc53-87ab7855282a/volumes" Feb 23 13:37:34 crc kubenswrapper[4851]: I0223 13:37:34.584354 4851 generic.go:334] "Generic (PLEG): container finished" podID="bc81c277-18fa-44d5-8211-37e2b5ca5069" containerID="b891ef1b769826a32a6d6315eed39bd72cf8c444b19db6a7f6dd5fb579766ccb" exitCode=0 Feb 23 13:37:34 crc kubenswrapper[4851]: I0223 13:37:34.584414 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" event={"ID":"bc81c277-18fa-44d5-8211-37e2b5ca5069","Type":"ContainerDied","Data":"b891ef1b769826a32a6d6315eed39bd72cf8c444b19db6a7f6dd5fb579766ccb"} Feb 23 13:37:35 crc kubenswrapper[4851]: I0223 13:37:35.949880 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.024979 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27vdd\" (UniqueName: \"kubernetes.io/projected/bc81c277-18fa-44d5-8211-37e2b5ca5069-kube-api-access-27vdd\") pod \"bc81c277-18fa-44d5-8211-37e2b5ca5069\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.025129 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-inventory\") pod \"bc81c277-18fa-44d5-8211-37e2b5ca5069\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.025210 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-ssh-key-openstack-edpm-ipam\") pod \"bc81c277-18fa-44d5-8211-37e2b5ca5069\" (UID: \"bc81c277-18fa-44d5-8211-37e2b5ca5069\") " Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.036084 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc81c277-18fa-44d5-8211-37e2b5ca5069-kube-api-access-27vdd" (OuterVolumeSpecName: "kube-api-access-27vdd") pod "bc81c277-18fa-44d5-8211-37e2b5ca5069" (UID: "bc81c277-18fa-44d5-8211-37e2b5ca5069"). InnerVolumeSpecName "kube-api-access-27vdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.040595 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-n6qtq"] Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.051099 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-n6qtq"] Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.053756 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bc81c277-18fa-44d5-8211-37e2b5ca5069" (UID: "bc81c277-18fa-44d5-8211-37e2b5ca5069"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.055174 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-inventory" (OuterVolumeSpecName: "inventory") pod "bc81c277-18fa-44d5-8211-37e2b5ca5069" (UID: "bc81c277-18fa-44d5-8211-37e2b5ca5069"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.127482 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27vdd\" (UniqueName: \"kubernetes.io/projected/bc81c277-18fa-44d5-8211-37e2b5ca5069-kube-api-access-27vdd\") on node \"crc\" DevicePath \"\"" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.127530 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.127541 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc81c277-18fa-44d5-8211-37e2b5ca5069-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.600626 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" event={"ID":"bc81c277-18fa-44d5-8211-37e2b5ca5069","Type":"ContainerDied","Data":"1a7f935a823df059bb865a0369cd935afb2cf241e8b7d78544b22f41ec65af45"} Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.600669 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a7f935a823df059bb865a0369cd935afb2cf241e8b7d78544b22f41ec65af45" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.600710 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-rxvsh" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.690283 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws"] Feb 23 13:37:36 crc kubenswrapper[4851]: E0223 13:37:36.691144 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc81c277-18fa-44d5-8211-37e2b5ca5069" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.691166 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc81c277-18fa-44d5-8211-37e2b5ca5069" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.691428 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc81c277-18fa-44d5-8211-37e2b5ca5069" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.692202 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.694508 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.698786 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.698957 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.699174 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.703793 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws"] Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.738246 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4fdws\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.738415 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4fdws\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.738454 
4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt47j\" (UniqueName: \"kubernetes.io/projected/75449ea8-fea6-480f-8a8c-10d24081a76f-kube-api-access-qt47j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4fdws\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.839908 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4fdws\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.840015 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4fdws\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.840054 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt47j\" (UniqueName: \"kubernetes.io/projected/75449ea8-fea6-480f-8a8c-10d24081a76f-kube-api-access-qt47j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4fdws\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.843495 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-4fdws\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.843813 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4fdws\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:37:36 crc kubenswrapper[4851]: I0223 13:37:36.860075 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt47j\" (UniqueName: \"kubernetes.io/projected/75449ea8-fea6-480f-8a8c-10d24081a76f-kube-api-access-qt47j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-4fdws\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:37:37 crc kubenswrapper[4851]: I0223 13:37:37.076129 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:37:37 crc kubenswrapper[4851]: I0223 13:37:37.549788 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws"] Feb 23 13:37:37 crc kubenswrapper[4851]: I0223 13:37:37.609494 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" event={"ID":"75449ea8-fea6-480f-8a8c-10d24081a76f","Type":"ContainerStarted","Data":"11475031b6596abc2bc448cd7fa0c8e48ce69e2f05f7b6841f084be17f938ab1"} Feb 23 13:37:37 crc kubenswrapper[4851]: I0223 13:37:37.635733 4851 scope.go:117] "RemoveContainer" containerID="05c0db6f5cb373a94b6b84602429998278895afcfbc11a4442337635af1f932e" Feb 23 13:37:37 crc kubenswrapper[4851]: I0223 13:37:37.877563 4851 scope.go:117] "RemoveContainer" containerID="bce5cc185b8b88d0a15a53cc561ccf2680545c1dfe57fde981752850e951b964" Feb 23 13:37:37 crc kubenswrapper[4851]: I0223 13:37:37.900218 4851 scope.go:117] "RemoveContainer" containerID="5718f6e2fef8f88ecee6191f90308c1e6f1cb1486ad95ab8816915b6dda426e3" Feb 23 13:37:37 crc kubenswrapper[4851]: I0223 13:37:37.946221 4851 scope.go:117] "RemoveContainer" containerID="5393f8055167ffe99ba5e48976a116e48c79832c72e4bfc51819f2c0d22161ea" Feb 23 13:37:37 crc kubenswrapper[4851]: I0223 13:37:37.979471 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c" path="/var/lib/kubelet/pods/e26fedd4-a6b4-4bbb-ad58-e64e964f1f4c/volumes" Feb 23 13:37:38 crc kubenswrapper[4851]: I0223 13:37:38.008568 4851 scope.go:117] "RemoveContainer" containerID="257cd63d2a8ced9088d12aef115576c1ec53c106e7724429fb32090800b6b4f1" Feb 23 13:37:38 crc kubenswrapper[4851]: I0223 13:37:38.112940 4851 scope.go:117] "RemoveContainer" containerID="9e6bba80d0fbbaf71bff771b064c0e300a0d85f8f12f445cee50ec5624277b3b" Feb 23 13:37:38 crc kubenswrapper[4851]: I0223 
13:37:38.131351 4851 scope.go:117] "RemoveContainer" containerID="5882c447bf0593a3ea784c253c941de4ecbf92fa9266af3f98a34603d92b1587" Feb 23 13:37:38 crc kubenswrapper[4851]: I0223 13:37:38.154717 4851 scope.go:117] "RemoveContainer" containerID="76fb212e0c75524cd48fa4aea5395cfe84d36839a062cc7617f48b797b3e9289" Feb 23 13:37:38 crc kubenswrapper[4851]: I0223 13:37:38.173307 4851 scope.go:117] "RemoveContainer" containerID="a1044a55accbb003d6b23f2e7fa3946ae583b1836f9deb04a07b56bc5e165cd9" Feb 23 13:37:38 crc kubenswrapper[4851]: I0223 13:37:38.196442 4851 scope.go:117] "RemoveContainer" containerID="88db0e3e152e2e59564c5dc81e3ba47be96cb8309af6bb7315e5d46e1cb7cdb8" Feb 23 13:37:38 crc kubenswrapper[4851]: I0223 13:37:38.225406 4851 scope.go:117] "RemoveContainer" containerID="44f2ac5fe295c1db8ff532f5586cc517bee4de8ad62e64660e2e5d5d08fd84aa" Feb 23 13:37:38 crc kubenswrapper[4851]: I0223 13:37:38.256892 4851 scope.go:117] "RemoveContainer" containerID="34850fa7ee6e512ffa8a54c6231e9350a12f99890715d0787beafd198f85f558" Feb 23 13:37:38 crc kubenswrapper[4851]: I0223 13:37:38.618671 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" event={"ID":"75449ea8-fea6-480f-8a8c-10d24081a76f","Type":"ContainerStarted","Data":"314ecdb661edde28529e61596713dea9f3dcae2d50607740d643b60be7da609b"} Feb 23 13:37:38 crc kubenswrapper[4851]: I0223 13:37:38.645916 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" podStartSLOduration=2.231350621 podStartE2EDuration="2.645895414s" podCreationTimestamp="2026-02-23 13:37:36 +0000 UTC" firstStartedPulling="2026-02-23 13:37:37.556302472 +0000 UTC m=+1812.238006150" lastFinishedPulling="2026-02-23 13:37:37.970847265 +0000 UTC m=+1812.652550943" observedRunningTime="2026-02-23 13:37:38.634391699 +0000 UTC m=+1813.316095397" watchObservedRunningTime="2026-02-23 13:37:38.645895414 +0000 
UTC m=+1813.327599102" Feb 23 13:37:38 crc kubenswrapper[4851]: I0223 13:37:38.968708 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:37:38 crc kubenswrapper[4851]: E0223 13:37:38.969061 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:37:53 crc kubenswrapper[4851]: I0223 13:37:53.968957 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:37:53 crc kubenswrapper[4851]: E0223 13:37:53.969772 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:38:08 crc kubenswrapper[4851]: I0223 13:38:08.036241 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-fsh9d"] Feb 23 13:38:08 crc kubenswrapper[4851]: I0223 13:38:08.045192 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-393d-account-create-update-dqnvt"] Feb 23 13:38:08 crc kubenswrapper[4851]: I0223 13:38:08.054166 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-393d-account-create-update-dqnvt"] Feb 23 13:38:08 crc kubenswrapper[4851]: I0223 13:38:08.061231 4851 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-api-db-create-fsh9d"] Feb 23 13:38:08 crc kubenswrapper[4851]: I0223 13:38:08.974047 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:38:08 crc kubenswrapper[4851]: E0223 13:38:08.974790 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.045463 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-z9xt4"] Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.053415 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-sntrg"] Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.060318 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7a31-account-create-update-j7zdq"] Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.067127 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9578-account-create-update-x8g7j"] Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.073808 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-sntrg"] Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.080704 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9578-account-create-update-x8g7j"] Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.087371 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7a31-account-create-update-j7zdq"] Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.094945 4851 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-z9xt4"] Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.981462 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000f7332-5032-4668-8151-d5235db27f97" path="/var/lib/kubelet/pods/000f7332-5032-4668-8151-d5235db27f97/volumes" Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.982614 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d" path="/var/lib/kubelet/pods/6f0fbc1e-a3e2-4a5c-acdf-dbabc242c54d/volumes" Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.983320 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86804859-3d9d-4fd8-9b36-f74c751d795f" path="/var/lib/kubelet/pods/86804859-3d9d-4fd8-9b36-f74c751d795f/volumes" Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.984058 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94500516-6967-409c-8197-2c93f110b9e7" path="/var/lib/kubelet/pods/94500516-6967-409c-8197-2c93f110b9e7/volumes" Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.985434 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbbe43f7-4c90-40e2-8127-b8e13a3b7656" path="/var/lib/kubelet/pods/cbbe43f7-4c90-40e2-8127-b8e13a3b7656/volumes" Feb 23 13:38:09 crc kubenswrapper[4851]: I0223 13:38:09.986159 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9dbd21e-9573-4caa-83d3-ed709dc66748" path="/var/lib/kubelet/pods/e9dbd21e-9573-4caa-83d3-ed709dc66748/volumes" Feb 23 13:38:18 crc kubenswrapper[4851]: I0223 13:38:18.937619 4851 generic.go:334] "Generic (PLEG): container finished" podID="75449ea8-fea6-480f-8a8c-10d24081a76f" containerID="314ecdb661edde28529e61596713dea9f3dcae2d50607740d643b60be7da609b" exitCode=0 Feb 23 13:38:18 crc kubenswrapper[4851]: I0223 13:38:18.937701 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" event={"ID":"75449ea8-fea6-480f-8a8c-10d24081a76f","Type":"ContainerDied","Data":"314ecdb661edde28529e61596713dea9f3dcae2d50607740d643b60be7da609b"} Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.368731 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.373183 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-ssh-key-openstack-edpm-ipam\") pod \"75449ea8-fea6-480f-8a8c-10d24081a76f\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.373231 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt47j\" (UniqueName: \"kubernetes.io/projected/75449ea8-fea6-480f-8a8c-10d24081a76f-kube-api-access-qt47j\") pod \"75449ea8-fea6-480f-8a8c-10d24081a76f\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.373294 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-inventory\") pod \"75449ea8-fea6-480f-8a8c-10d24081a76f\" (UID: \"75449ea8-fea6-480f-8a8c-10d24081a76f\") " Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.379285 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75449ea8-fea6-480f-8a8c-10d24081a76f-kube-api-access-qt47j" (OuterVolumeSpecName: "kube-api-access-qt47j") pod "75449ea8-fea6-480f-8a8c-10d24081a76f" (UID: "75449ea8-fea6-480f-8a8c-10d24081a76f"). InnerVolumeSpecName "kube-api-access-qt47j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.417305 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "75449ea8-fea6-480f-8a8c-10d24081a76f" (UID: "75449ea8-fea6-480f-8a8c-10d24081a76f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.420128 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-inventory" (OuterVolumeSpecName: "inventory") pod "75449ea8-fea6-480f-8a8c-10d24081a76f" (UID: "75449ea8-fea6-480f-8a8c-10d24081a76f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.475981 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt47j\" (UniqueName: \"kubernetes.io/projected/75449ea8-fea6-480f-8a8c-10d24081a76f-kube-api-access-qt47j\") on node \"crc\" DevicePath \"\"" Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.476018 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.476031 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75449ea8-fea6-480f-8a8c-10d24081a76f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.959310 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" 
event={"ID":"75449ea8-fea6-480f-8a8c-10d24081a76f","Type":"ContainerDied","Data":"11475031b6596abc2bc448cd7fa0c8e48ce69e2f05f7b6841f084be17f938ab1"} Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.959789 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11475031b6596abc2bc448cd7fa0c8e48ce69e2f05f7b6841f084be17f938ab1" Feb 23 13:38:20 crc kubenswrapper[4851]: I0223 13:38:20.959415 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-4fdws" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.052465 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-thkvh"] Feb 23 13:38:21 crc kubenswrapper[4851]: E0223 13:38:21.052874 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75449ea8-fea6-480f-8a8c-10d24081a76f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.052894 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="75449ea8-fea6-480f-8a8c-10d24081a76f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.053148 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="75449ea8-fea6-480f-8a8c-10d24081a76f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.053839 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.057929 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.058002 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.058192 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.057946 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.079814 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-thkvh"] Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.090020 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-thkvh\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.090065 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp6lh\" (UniqueName: \"kubernetes.io/projected/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-kube-api-access-rp6lh\") pod \"ssh-known-hosts-edpm-deployment-thkvh\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.090087 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-thkvh\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.191092 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-thkvh\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.191139 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp6lh\" (UniqueName: \"kubernetes.io/projected/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-kube-api-access-rp6lh\") pod \"ssh-known-hosts-edpm-deployment-thkvh\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.191161 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-thkvh\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.196397 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-thkvh\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:21 crc kubenswrapper[4851]: 
I0223 13:38:21.201726 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-thkvh\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.211012 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp6lh\" (UniqueName: \"kubernetes.io/projected/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-kube-api-access-rp6lh\") pod \"ssh-known-hosts-edpm-deployment-thkvh\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.376322 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.915003 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-thkvh"] Feb 23 13:38:21 crc kubenswrapper[4851]: I0223 13:38:21.977290 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" event={"ID":"6d0c05be-7530-42cf-86e9-e0d67e24ce4d","Type":"ContainerStarted","Data":"807f2428099fa0852af08adc5a1c58f2192d74fc7bb4422022c5f96fe94941cc"} Feb 23 13:38:22 crc kubenswrapper[4851]: I0223 13:38:22.971815 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:38:22 crc kubenswrapper[4851]: E0223 13:38:22.972493 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:38:22 crc kubenswrapper[4851]: I0223 13:38:22.975181 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" event={"ID":"6d0c05be-7530-42cf-86e9-e0d67e24ce4d","Type":"ContainerStarted","Data":"fac797ec2214f471085e8ee6423b35b34774bc4e5b845bc9d93c7d0eafe202e3"} Feb 23 13:38:22 crc kubenswrapper[4851]: I0223 13:38:22.992383 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" podStartSLOduration=1.6018043789999998 podStartE2EDuration="1.992362295s" podCreationTimestamp="2026-02-23 13:38:21 +0000 UTC" firstStartedPulling="2026-02-23 13:38:21.915652871 +0000 UTC m=+1856.597356539" lastFinishedPulling="2026-02-23 13:38:22.306210777 +0000 UTC m=+1856.987914455" observedRunningTime="2026-02-23 13:38:22.98971158 +0000 UTC m=+1857.671415268" watchObservedRunningTime="2026-02-23 13:38:22.992362295 +0000 UTC m=+1857.674065983" Feb 23 13:38:29 crc kubenswrapper[4851]: I0223 13:38:29.025529 4851 generic.go:334] "Generic (PLEG): container finished" podID="6d0c05be-7530-42cf-86e9-e0d67e24ce4d" containerID="fac797ec2214f471085e8ee6423b35b34774bc4e5b845bc9d93c7d0eafe202e3" exitCode=0 Feb 23 13:38:29 crc kubenswrapper[4851]: I0223 13:38:29.025658 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" event={"ID":"6d0c05be-7530-42cf-86e9-e0d67e24ce4d","Type":"ContainerDied","Data":"fac797ec2214f471085e8ee6423b35b34774bc4e5b845bc9d93c7d0eafe202e3"} Feb 23 13:38:30 crc kubenswrapper[4851]: I0223 13:38:30.508860 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" Feb 23 13:38:30 crc kubenswrapper[4851]: I0223 13:38:30.671179 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-ssh-key-openstack-edpm-ipam\") pod \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " Feb 23 13:38:30 crc kubenswrapper[4851]: I0223 13:38:30.671434 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-inventory-0\") pod \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " Feb 23 13:38:30 crc kubenswrapper[4851]: I0223 13:38:30.671528 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp6lh\" (UniqueName: \"kubernetes.io/projected/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-kube-api-access-rp6lh\") pod \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\" (UID: \"6d0c05be-7530-42cf-86e9-e0d67e24ce4d\") " Feb 23 13:38:30 crc kubenswrapper[4851]: I0223 13:38:30.676649 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-kube-api-access-rp6lh" (OuterVolumeSpecName: "kube-api-access-rp6lh") pod "6d0c05be-7530-42cf-86e9-e0d67e24ce4d" (UID: "6d0c05be-7530-42cf-86e9-e0d67e24ce4d"). InnerVolumeSpecName "kube-api-access-rp6lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:38:30 crc kubenswrapper[4851]: I0223 13:38:30.709487 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "6d0c05be-7530-42cf-86e9-e0d67e24ce4d" (UID: "6d0c05be-7530-42cf-86e9-e0d67e24ce4d"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:38:30 crc kubenswrapper[4851]: I0223 13:38:30.716116 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6d0c05be-7530-42cf-86e9-e0d67e24ce4d" (UID: "6d0c05be-7530-42cf-86e9-e0d67e24ce4d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:38:30 crc kubenswrapper[4851]: I0223 13:38:30.773413 4851 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:38:30 crc kubenswrapper[4851]: I0223 13:38:30.773727 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp6lh\" (UniqueName: \"kubernetes.io/projected/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-kube-api-access-rp6lh\") on node \"crc\" DevicePath \"\"" Feb 23 13:38:30 crc kubenswrapper[4851]: I0223 13:38:30.773739 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d0c05be-7530-42cf-86e9-e0d67e24ce4d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.040431 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-thkvh" event={"ID":"6d0c05be-7530-42cf-86e9-e0d67e24ce4d","Type":"ContainerDied","Data":"807f2428099fa0852af08adc5a1c58f2192d74fc7bb4422022c5f96fe94941cc"} Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.040484 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="807f2428099fa0852af08adc5a1c58f2192d74fc7bb4422022c5f96fe94941cc" Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.040541 
4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-thkvh"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.114959 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"]
Feb 23 13:38:31 crc kubenswrapper[4851]: E0223 13:38:31.115371 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0c05be-7530-42cf-86e9-e0d67e24ce4d" containerName="ssh-known-hosts-edpm-deployment"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.115390 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0c05be-7530-42cf-86e9-e0d67e24ce4d" containerName="ssh-known-hosts-edpm-deployment"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.115608 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0c05be-7530-42cf-86e9-e0d67e24ce4d" containerName="ssh-known-hosts-edpm-deployment"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.116170 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.117910 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.118250 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.118423 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.119042 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.134496 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"]
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.180739 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd2p6\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.180804 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkd8z\" (UniqueName: \"kubernetes.io/projected/3a4f0b71-7653-49fa-9155-3e0d4197e087-kube-api-access-rkd8z\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd2p6\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.180926 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd2p6\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.282244 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkd8z\" (UniqueName: \"kubernetes.io/projected/3a4f0b71-7653-49fa-9155-3e0d4197e087-kube-api-access-rkd8z\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd2p6\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.282505 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd2p6\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.282675 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd2p6\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.286899 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd2p6\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.289510 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd2p6\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.306094 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkd8z\" (UniqueName: \"kubernetes.io/projected/3a4f0b71-7653-49fa-9155-3e0d4197e087-kube-api-access-rkd8z\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kd2p6\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.433162 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:31 crc kubenswrapper[4851]: I0223 13:38:31.930312 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"]
Feb 23 13:38:32 crc kubenswrapper[4851]: I0223 13:38:32.049847 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6" event={"ID":"3a4f0b71-7653-49fa-9155-3e0d4197e087","Type":"ContainerStarted","Data":"7bd3e585385974090a160d7e32a3c5b4f3be82d6e64862e22ab8c09ae680d51c"}
Feb 23 13:38:33 crc kubenswrapper[4851]: I0223 13:38:33.059585 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6" event={"ID":"3a4f0b71-7653-49fa-9155-3e0d4197e087","Type":"ContainerStarted","Data":"866b71709090b3ff897d1b9c15335e8bba00d9127f022b47c6b74a639a926673"}
Feb 23 13:38:33 crc kubenswrapper[4851]: I0223 13:38:33.083738 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6" podStartSLOduration=1.6937022590000002 podStartE2EDuration="2.08372166s" podCreationTimestamp="2026-02-23 13:38:31 +0000 UTC" firstStartedPulling="2026-02-23 13:38:31.938471668 +0000 UTC m=+1866.620175346" lastFinishedPulling="2026-02-23 13:38:32.328491049 +0000 UTC m=+1867.010194747" observedRunningTime="2026-02-23 13:38:33.079809599 +0000 UTC m=+1867.761513297" watchObservedRunningTime="2026-02-23 13:38:33.08372166 +0000 UTC m=+1867.765425328"
Feb 23 13:38:36 crc kubenswrapper[4851]: I0223 13:38:36.968543 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2"
Feb 23 13:38:36 crc kubenswrapper[4851]: E0223 13:38:36.969387 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4"
Feb 23 13:38:37 crc kubenswrapper[4851]: I0223 13:38:37.048884 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-96s8v"]
Feb 23 13:38:37 crc kubenswrapper[4851]: I0223 13:38:37.059573 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-96s8v"]
Feb 23 13:38:37 crc kubenswrapper[4851]: I0223 13:38:37.978969 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8cf62f0-5eae-44f2-b264-6e6449fd7ff5" path="/var/lib/kubelet/pods/a8cf62f0-5eae-44f2-b264-6e6449fd7ff5/volumes"
Feb 23 13:38:38 crc kubenswrapper[4851]: I0223 13:38:38.469091 4851 scope.go:117] "RemoveContainer" containerID="0430524c47c7b3c26a76a228d3e69f0a775dce6089fd1c1eec597a8b0101390f"
Feb 23 13:38:38 crc kubenswrapper[4851]: I0223 13:38:38.487417 4851 scope.go:117] "RemoveContainer" containerID="c3552dad3047de770dc11a9668914cf9db582265ccd8d0dcb50050246e81d454"
Feb 23 13:38:38 crc kubenswrapper[4851]: I0223 13:38:38.539240 4851 scope.go:117] "RemoveContainer" containerID="24a966637d389bf4855d40a33292acd850524301398839949486e23ad5e8f718"
Feb 23 13:38:38 crc kubenswrapper[4851]: I0223 13:38:38.611528 4851 scope.go:117] "RemoveContainer" containerID="b72bd1ccb44177c2da0a0e88fbd4bae054a1a192761e5ee13c464f8cfd2a8ee5"
Feb 23 13:38:38 crc kubenswrapper[4851]: I0223 13:38:38.645079 4851 scope.go:117] "RemoveContainer" containerID="4a04b9fce1974cfe25418462f7cefe5e703ac87a53dde466b3df87a3bdbb1cfe"
Feb 23 13:38:38 crc kubenswrapper[4851]: I0223 13:38:38.686232 4851 scope.go:117] "RemoveContainer" containerID="93bedb3285bebb32ce651d3cbb9c27d569622a9d414168d437e3a43b54c8253a"
Feb 23 13:38:38 crc kubenswrapper[4851]: I0223 13:38:38.721894 4851 scope.go:117] "RemoveContainer" containerID="1be81a45d8222334125bc708b571d676e774b72b73b12dba378b576eb022a89f"
Feb 23 13:38:39 crc kubenswrapper[4851]: I0223 13:38:39.104779 4851 generic.go:334] "Generic (PLEG): container finished" podID="3a4f0b71-7653-49fa-9155-3e0d4197e087" containerID="866b71709090b3ff897d1b9c15335e8bba00d9127f022b47c6b74a639a926673" exitCode=0
Feb 23 13:38:39 crc kubenswrapper[4851]: I0223 13:38:39.104844 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6" event={"ID":"3a4f0b71-7653-49fa-9155-3e0d4197e087","Type":"ContainerDied","Data":"866b71709090b3ff897d1b9c15335e8bba00d9127f022b47c6b74a639a926673"}
Feb 23 13:38:40 crc kubenswrapper[4851]: I0223 13:38:40.489358 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:40 crc kubenswrapper[4851]: I0223 13:38:40.576558 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-ssh-key-openstack-edpm-ipam\") pod \"3a4f0b71-7653-49fa-9155-3e0d4197e087\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") "
Feb 23 13:38:40 crc kubenswrapper[4851]: I0223 13:38:40.576636 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkd8z\" (UniqueName: \"kubernetes.io/projected/3a4f0b71-7653-49fa-9155-3e0d4197e087-kube-api-access-rkd8z\") pod \"3a4f0b71-7653-49fa-9155-3e0d4197e087\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") "
Feb 23 13:38:40 crc kubenswrapper[4851]: I0223 13:38:40.576709 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-inventory\") pod \"3a4f0b71-7653-49fa-9155-3e0d4197e087\" (UID: \"3a4f0b71-7653-49fa-9155-3e0d4197e087\") "
Feb 23 13:38:40 crc kubenswrapper[4851]: I0223 13:38:40.587516 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a4f0b71-7653-49fa-9155-3e0d4197e087-kube-api-access-rkd8z" (OuterVolumeSpecName: "kube-api-access-rkd8z") pod "3a4f0b71-7653-49fa-9155-3e0d4197e087" (UID: "3a4f0b71-7653-49fa-9155-3e0d4197e087"). InnerVolumeSpecName "kube-api-access-rkd8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:38:40 crc kubenswrapper[4851]: I0223 13:38:40.602254 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-inventory" (OuterVolumeSpecName: "inventory") pod "3a4f0b71-7653-49fa-9155-3e0d4197e087" (UID: "3a4f0b71-7653-49fa-9155-3e0d4197e087"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:38:40 crc kubenswrapper[4851]: I0223 13:38:40.615577 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3a4f0b71-7653-49fa-9155-3e0d4197e087" (UID: "3a4f0b71-7653-49fa-9155-3e0d4197e087"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:38:40 crc kubenswrapper[4851]: I0223 13:38:40.677674 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkd8z\" (UniqueName: \"kubernetes.io/projected/3a4f0b71-7653-49fa-9155-3e0d4197e087-kube-api-access-rkd8z\") on node \"crc\" DevicePath \"\""
Feb 23 13:38:40 crc kubenswrapper[4851]: I0223 13:38:40.677701 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 13:38:40 crc kubenswrapper[4851]: I0223 13:38:40.677710 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3a4f0b71-7653-49fa-9155-3e0d4197e087-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.121000 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6" event={"ID":"3a4f0b71-7653-49fa-9155-3e0d4197e087","Type":"ContainerDied","Data":"7bd3e585385974090a160d7e32a3c5b4f3be82d6e64862e22ab8c09ae680d51c"}
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.121200 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bd3e585385974090a160d7e32a3c5b4f3be82d6e64862e22ab8c09ae680d51c"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.121058 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kd2p6"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.195211 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"]
Feb 23 13:38:41 crc kubenswrapper[4851]: E0223 13:38:41.195768 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4f0b71-7653-49fa-9155-3e0d4197e087" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.195789 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4f0b71-7653-49fa-9155-3e0d4197e087" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.195998 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4f0b71-7653-49fa-9155-3e0d4197e087" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.196874 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.204723 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.204822 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.204992 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.205257 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.206652 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"]
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.287930 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.288021 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4rsb\" (UniqueName: \"kubernetes.io/projected/964cf639-e7d3-402e-80f9-d8d27ebf5db7-kube-api-access-f4rsb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.288190 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.388945 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.389007 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.389078 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4rsb\" (UniqueName: \"kubernetes.io/projected/964cf639-e7d3-402e-80f9-d8d27ebf5db7-kube-api-access-f4rsb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.396027 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.396762 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.404017 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4rsb\" (UniqueName: \"kubernetes.io/projected/964cf639-e7d3-402e-80f9-d8d27ebf5db7-kube-api-access-f4rsb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:41 crc kubenswrapper[4851]: I0223 13:38:41.523431 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:42 crc kubenswrapper[4851]: I0223 13:38:42.042759 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"]
Feb 23 13:38:42 crc kubenswrapper[4851]: I0223 13:38:42.128827 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm" event={"ID":"964cf639-e7d3-402e-80f9-d8d27ebf5db7","Type":"ContainerStarted","Data":"f4702c46f60059e74827e32cf62a9d8f4a3161348985809161c8c6483d61c570"}
Feb 23 13:38:43 crc kubenswrapper[4851]: I0223 13:38:43.138901 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm" event={"ID":"964cf639-e7d3-402e-80f9-d8d27ebf5db7","Type":"ContainerStarted","Data":"7507b6637977ef27ea80e046719154abdaa1ce594717ddd735f768043481decc"}
Feb 23 13:38:43 crc kubenswrapper[4851]: I0223 13:38:43.165678 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm" podStartSLOduration=1.708639284 podStartE2EDuration="2.165657468s" podCreationTimestamp="2026-02-23 13:38:41 +0000 UTC" firstStartedPulling="2026-02-23 13:38:42.057951727 +0000 UTC m=+1876.739655405" lastFinishedPulling="2026-02-23 13:38:42.514969911 +0000 UTC m=+1877.196673589" observedRunningTime="2026-02-23 13:38:43.154228795 +0000 UTC m=+1877.835932483" watchObservedRunningTime="2026-02-23 13:38:43.165657468 +0000 UTC m=+1877.847361156"
Feb 23 13:38:51 crc kubenswrapper[4851]: I0223 13:38:51.206647 4851 generic.go:334] "Generic (PLEG): container finished" podID="964cf639-e7d3-402e-80f9-d8d27ebf5db7" containerID="7507b6637977ef27ea80e046719154abdaa1ce594717ddd735f768043481decc" exitCode=0
Feb 23 13:38:51 crc kubenswrapper[4851]: I0223 13:38:51.207190 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm" event={"ID":"964cf639-e7d3-402e-80f9-d8d27ebf5db7","Type":"ContainerDied","Data":"7507b6637977ef27ea80e046719154abdaa1ce594717ddd735f768043481decc"}
Feb 23 13:38:51 crc kubenswrapper[4851]: I0223 13:38:51.968807 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2"
Feb 23 13:38:51 crc kubenswrapper[4851]: E0223 13:38:51.969052 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4"
Feb 23 13:38:52 crc kubenswrapper[4851]: I0223 13:38:52.567447 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:52 crc kubenswrapper[4851]: I0223 13:38:52.608921 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-inventory\") pod \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") "
Feb 23 13:38:52 crc kubenswrapper[4851]: I0223 13:38:52.609032 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-ssh-key-openstack-edpm-ipam\") pod \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") "
Feb 23 13:38:52 crc kubenswrapper[4851]: I0223 13:38:52.609065 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4rsb\" (UniqueName: \"kubernetes.io/projected/964cf639-e7d3-402e-80f9-d8d27ebf5db7-kube-api-access-f4rsb\") pod \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\" (UID: \"964cf639-e7d3-402e-80f9-d8d27ebf5db7\") "
Feb 23 13:38:52 crc kubenswrapper[4851]: I0223 13:38:52.614014 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964cf639-e7d3-402e-80f9-d8d27ebf5db7-kube-api-access-f4rsb" (OuterVolumeSpecName: "kube-api-access-f4rsb") pod "964cf639-e7d3-402e-80f9-d8d27ebf5db7" (UID: "964cf639-e7d3-402e-80f9-d8d27ebf5db7"). InnerVolumeSpecName "kube-api-access-f4rsb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:38:52 crc kubenswrapper[4851]: I0223 13:38:52.635238 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "964cf639-e7d3-402e-80f9-d8d27ebf5db7" (UID: "964cf639-e7d3-402e-80f9-d8d27ebf5db7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:38:52 crc kubenswrapper[4851]: I0223 13:38:52.636011 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-inventory" (OuterVolumeSpecName: "inventory") pod "964cf639-e7d3-402e-80f9-d8d27ebf5db7" (UID: "964cf639-e7d3-402e-80f9-d8d27ebf5db7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 13:38:52 crc kubenswrapper[4851]: I0223 13:38:52.711665 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 13:38:52 crc kubenswrapper[4851]: I0223 13:38:52.711837 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/964cf639-e7d3-402e-80f9-d8d27ebf5db7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 13:38:52 crc kubenswrapper[4851]: I0223 13:38:52.711944 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4rsb\" (UniqueName: \"kubernetes.io/projected/964cf639-e7d3-402e-80f9-d8d27ebf5db7-kube-api-access-f4rsb\") on node \"crc\" DevicePath \"\""
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.224876 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm" event={"ID":"964cf639-e7d3-402e-80f9-d8d27ebf5db7","Type":"ContainerDied","Data":"f4702c46f60059e74827e32cf62a9d8f4a3161348985809161c8c6483d61c570"}
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.225212 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4702c46f60059e74827e32cf62a9d8f4a3161348985809161c8c6483d61c570"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.224998 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.295996 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"]
Feb 23 13:38:53 crc kubenswrapper[4851]: E0223 13:38:53.296415 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964cf639-e7d3-402e-80f9-d8d27ebf5db7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.296439 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="964cf639-e7d3-402e-80f9-d8d27ebf5db7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.296665 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="964cf639-e7d3-402e-80f9-d8d27ebf5db7" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.297352 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.302475 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.302872 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.303154 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.303404 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.304420 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.304577 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.304605 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.306172 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.309171 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"]
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.324641 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.324683 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.324850 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.324940 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.325128 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.325195 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.325277 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.325392 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.325466 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.325618 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m64pz\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-kube-api-access-m64pz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.325718 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.325792 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.325893 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.326025 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428368 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428432 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"
Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428476 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ovn-combined-ca-bundle\") pod
\"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428512 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428580 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428607 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428645 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428690 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428725 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428757 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m64pz\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-kube-api-access-m64pz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428790 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428813 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428837 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.428885 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.433040 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.433197 4851 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.433713 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.433715 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.435408 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.435471 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.435426 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.435781 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.436147 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.436157 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.437153 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.438111 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.438861 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.447216 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m64pz\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-kube-api-access-m64pz\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:53 crc kubenswrapper[4851]: I0223 13:38:53.615981 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:38:54 crc kubenswrapper[4851]: I0223 13:38:54.110006 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh"] Feb 23 13:38:54 crc kubenswrapper[4851]: I0223 13:38:54.234044 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" event={"ID":"fde15470-10ed-44ef-8ba7-a03c9046f828","Type":"ContainerStarted","Data":"a72d00b52f416b9792afd4cfcdca8739f6b6819674cb67b9ce0b0b34b5013673"} Feb 23 13:38:56 crc kubenswrapper[4851]: I0223 13:38:56.038040 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l76c"] Feb 23 13:38:56 crc kubenswrapper[4851]: I0223 13:38:56.046236 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cz56r"] Feb 23 13:38:56 crc kubenswrapper[4851]: I0223 13:38:56.053417 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cz56r"] Feb 23 13:38:56 crc kubenswrapper[4851]: I0223 13:38:56.060945 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7l76c"] Feb 23 13:38:56 crc kubenswrapper[4851]: I0223 13:38:56.273636 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" event={"ID":"fde15470-10ed-44ef-8ba7-a03c9046f828","Type":"ContainerStarted","Data":"5501e30c31fd7ae982fe9b1bbf9c6622ccb09dc54f29e5fc69f75c31a125f713"} Feb 23 13:38:56 crc kubenswrapper[4851]: I0223 13:38:56.294638 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" podStartSLOduration=1.774937789 podStartE2EDuration="3.294620173s" podCreationTimestamp="2026-02-23 13:38:53 +0000 UTC" firstStartedPulling="2026-02-23 13:38:54.115066766 +0000 UTC m=+1888.796770434" lastFinishedPulling="2026-02-23 13:38:55.63474914 +0000 UTC m=+1890.316452818" observedRunningTime="2026-02-23 13:38:56.289038285 +0000 UTC m=+1890.970741963" watchObservedRunningTime="2026-02-23 13:38:56.294620173 +0000 UTC m=+1890.976323861" Feb 23 13:38:57 crc kubenswrapper[4851]: I0223 13:38:57.978491 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f4e3d1b-bc68-4384-9d6d-b4712e543629" path="/var/lib/kubelet/pods/0f4e3d1b-bc68-4384-9d6d-b4712e543629/volumes" Feb 23 13:38:57 crc kubenswrapper[4851]: I0223 13:38:57.979419 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4e808d-2f6c-4882-8cf1-32bd909b3d5c" path="/var/lib/kubelet/pods/6c4e808d-2f6c-4882-8cf1-32bd909b3d5c/volumes" Feb 23 13:39:05 crc kubenswrapper[4851]: I0223 13:39:05.974280 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:39:05 crc kubenswrapper[4851]: E0223 13:39:05.974982 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:39:18 crc kubenswrapper[4851]: I0223 13:39:18.969242 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:39:18 crc kubenswrapper[4851]: E0223 13:39:18.970040 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:39:28 crc kubenswrapper[4851]: I0223 13:39:28.547252 4851 generic.go:334] "Generic (PLEG): container finished" podID="fde15470-10ed-44ef-8ba7-a03c9046f828" containerID="5501e30c31fd7ae982fe9b1bbf9c6622ccb09dc54f29e5fc69f75c31a125f713" exitCode=0 Feb 23 13:39:28 crc kubenswrapper[4851]: I0223 13:39:28.547451 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" event={"ID":"fde15470-10ed-44ef-8ba7-a03c9046f828","Type":"ContainerDied","Data":"5501e30c31fd7ae982fe9b1bbf9c6622ccb09dc54f29e5fc69f75c31a125f713"} Feb 23 13:39:29 crc kubenswrapper[4851]: I0223 13:39:29.912782 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034037 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-inventory\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034115 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034204 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-nova-combined-ca-bundle\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034241 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ssh-key-openstack-edpm-ipam\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034289 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-ovn-default-certs-0\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: 
\"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034319 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-neutron-metadata-combined-ca-bundle\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034379 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034415 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-repo-setup-combined-ca-bundle\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034456 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034532 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-bootstrap-combined-ca-bundle\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: 
\"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034562 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m64pz\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-kube-api-access-m64pz\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034591 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-telemetry-combined-ca-bundle\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034624 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-libvirt-combined-ca-bundle\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.034697 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ovn-combined-ca-bundle\") pod \"fde15470-10ed-44ef-8ba7-a03c9046f828\" (UID: \"fde15470-10ed-44ef-8ba7-a03c9046f828\") " Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.040863 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.041066 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.042194 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.042350 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.042344 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.042420 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.042443 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.042825 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-kube-api-access-m64pz" (OuterVolumeSpecName: "kube-api-access-m64pz") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). 
InnerVolumeSpecName "kube-api-access-m64pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.043263 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.043851 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.043915 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.044718 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.063933 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-inventory" (OuterVolumeSpecName: "inventory") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.066550 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fde15470-10ed-44ef-8ba7-a03c9046f828" (UID: "fde15470-10ed-44ef-8ba7-a03c9046f828"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137658 4851 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137703 4851 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137721 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m64pz\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-kube-api-access-m64pz\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137734 4851 reconciler_common.go:293] "Volume detached for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137746 4851 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137760 4851 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137774 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137786 4851 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137800 4851 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137812 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137824 4851 
reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137836 4851 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137851 4851 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fde15470-10ed-44ef-8ba7-a03c9046f828-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.137863 4851 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde15470-10ed-44ef-8ba7-a03c9046f828-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.566539 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" event={"ID":"fde15470-10ed-44ef-8ba7-a03c9046f828","Type":"ContainerDied","Data":"a72d00b52f416b9792afd4cfcdca8739f6b6819674cb67b9ce0b0b34b5013673"} Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.566603 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a72d00b52f416b9792afd4cfcdca8739f6b6819674cb67b9ce0b0b34b5013673" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.566639 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.664728 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq"] Feb 23 13:39:30 crc kubenswrapper[4851]: E0223 13:39:30.665208 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde15470-10ed-44ef-8ba7-a03c9046f828" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.665235 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde15470-10ed-44ef-8ba7-a03c9046f828" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.665466 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde15470-10ed-44ef-8ba7-a03c9046f828" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.666269 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.671592 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.671840 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.671983 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.672313 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.672467 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.684949 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq"] Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.750778 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4j4z\" (UniqueName: \"kubernetes.io/projected/b70c39f9-b146-4980-bb34-0034ed5b8b86-kube-api-access-b4j4z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.750879 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: 
\"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.750949 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.751014 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.751137 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.852184 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4j4z\" (UniqueName: \"kubernetes.io/projected/b70c39f9-b146-4980-bb34-0034ed5b8b86-kube-api-access-b4j4z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.852501 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.852593 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.852653 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.852757 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.853691 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: 
\"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.865396 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.865817 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.867319 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.867802 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4j4z\" (UniqueName: \"kubernetes.io/projected/b70c39f9-b146-4980-bb34-0034ed5b8b86-kube-api-access-b4j4z\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-5frpq\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.968852 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:39:30 
crc kubenswrapper[4851]: E0223 13:39:30.969076 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:39:30 crc kubenswrapper[4851]: I0223 13:39:30.993276 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:39:31 crc kubenswrapper[4851]: I0223 13:39:31.499660 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq"] Feb 23 13:39:31 crc kubenswrapper[4851]: I0223 13:39:31.575377 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" event={"ID":"b70c39f9-b146-4980-bb34-0034ed5b8b86","Type":"ContainerStarted","Data":"fe1eeebdeee51628dcb2c67e7fc53eccac86d4e1b65827fb9858f10526775406"} Feb 23 13:39:32 crc kubenswrapper[4851]: I0223 13:39:32.585007 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" event={"ID":"b70c39f9-b146-4980-bb34-0034ed5b8b86","Type":"ContainerStarted","Data":"10382d6539d37e70db653476614f183bfb11f974c82a9040b335c7c01c2c1897"} Feb 23 13:39:32 crc kubenswrapper[4851]: I0223 13:39:32.601920 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" podStartSLOduration=2.129333368 podStartE2EDuration="2.601807096s" podCreationTimestamp="2026-02-23 13:39:30 +0000 UTC" firstStartedPulling="2026-02-23 13:39:31.506579563 +0000 UTC m=+1926.188283251" lastFinishedPulling="2026-02-23 13:39:31.979053301 +0000 UTC 
m=+1926.660756979" observedRunningTime="2026-02-23 13:39:32.601157107 +0000 UTC m=+1927.282860805" watchObservedRunningTime="2026-02-23 13:39:32.601807096 +0000 UTC m=+1927.283510764" Feb 23 13:39:38 crc kubenswrapper[4851]: I0223 13:39:38.863661 4851 scope.go:117] "RemoveContainer" containerID="db533a071b07e5501e6b9d85464a687c9c14af8b9d432f34f718d23fa665b593" Feb 23 13:39:38 crc kubenswrapper[4851]: I0223 13:39:38.908026 4851 scope.go:117] "RemoveContainer" containerID="7975ab1c739789a9c60304386c65786134384f35bdb60bc7a91c769d71037b49" Feb 23 13:39:40 crc kubenswrapper[4851]: I0223 13:39:40.042934 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sfpzs"] Feb 23 13:39:40 crc kubenswrapper[4851]: I0223 13:39:40.053053 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sfpzs"] Feb 23 13:39:41 crc kubenswrapper[4851]: I0223 13:39:41.979771 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f8848be-a603-4de9-9834-05c24e156662" path="/var/lib/kubelet/pods/9f8848be-a603-4de9-9834-05c24e156662/volumes" Feb 23 13:39:42 crc kubenswrapper[4851]: I0223 13:39:42.969373 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:39:42 crc kubenswrapper[4851]: E0223 13:39:42.969928 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:39:57 crc kubenswrapper[4851]: I0223 13:39:57.968967 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:39:57 crc 
kubenswrapper[4851]: E0223 13:39:57.969764 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:40:09 crc kubenswrapper[4851]: I0223 13:40:09.968902 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:40:09 crc kubenswrapper[4851]: E0223 13:40:09.969628 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:40:21 crc kubenswrapper[4851]: I0223 13:40:21.969065 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:40:21 crc kubenswrapper[4851]: E0223 13:40:21.970161 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:40:26 crc kubenswrapper[4851]: I0223 13:40:26.095958 4851 generic.go:334] "Generic (PLEG): container finished" podID="b70c39f9-b146-4980-bb34-0034ed5b8b86" 
containerID="10382d6539d37e70db653476614f183bfb11f974c82a9040b335c7c01c2c1897" exitCode=0 Feb 23 13:40:26 crc kubenswrapper[4851]: I0223 13:40:26.096030 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" event={"ID":"b70c39f9-b146-4980-bb34-0034ed5b8b86","Type":"ContainerDied","Data":"10382d6539d37e70db653476614f183bfb11f974c82a9040b335c7c01c2c1897"} Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.425981 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.571290 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ssh-key-openstack-edpm-ipam\") pod \"b70c39f9-b146-4980-bb34-0034ed5b8b86\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.571366 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovn-combined-ca-bundle\") pod \"b70c39f9-b146-4980-bb34-0034ed5b8b86\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.571494 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovncontroller-config-0\") pod \"b70c39f9-b146-4980-bb34-0034ed5b8b86\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.571520 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-inventory\") pod 
\"b70c39f9-b146-4980-bb34-0034ed5b8b86\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.571669 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4j4z\" (UniqueName: \"kubernetes.io/projected/b70c39f9-b146-4980-bb34-0034ed5b8b86-kube-api-access-b4j4z\") pod \"b70c39f9-b146-4980-bb34-0034ed5b8b86\" (UID: \"b70c39f9-b146-4980-bb34-0034ed5b8b86\") " Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.578116 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b70c39f9-b146-4980-bb34-0034ed5b8b86" (UID: "b70c39f9-b146-4980-bb34-0034ed5b8b86"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.578178 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70c39f9-b146-4980-bb34-0034ed5b8b86-kube-api-access-b4j4z" (OuterVolumeSpecName: "kube-api-access-b4j4z") pod "b70c39f9-b146-4980-bb34-0034ed5b8b86" (UID: "b70c39f9-b146-4980-bb34-0034ed5b8b86"). InnerVolumeSpecName "kube-api-access-b4j4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.596982 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "b70c39f9-b146-4980-bb34-0034ed5b8b86" (UID: "b70c39f9-b146-4980-bb34-0034ed5b8b86"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.598265 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b70c39f9-b146-4980-bb34-0034ed5b8b86" (UID: "b70c39f9-b146-4980-bb34-0034ed5b8b86"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.600290 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-inventory" (OuterVolumeSpecName: "inventory") pod "b70c39f9-b146-4980-bb34-0034ed5b8b86" (UID: "b70c39f9-b146-4980-bb34-0034ed5b8b86"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.673524 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.673564 4851 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.673577 4851 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b70c39f9-b146-4980-bb34-0034ed5b8b86-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.673589 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b70c39f9-b146-4980-bb34-0034ed5b8b86-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:40:27 crc kubenswrapper[4851]: I0223 13:40:27.673601 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4j4z\" (UniqueName: \"kubernetes.io/projected/b70c39f9-b146-4980-bb34-0034ed5b8b86-kube-api-access-b4j4z\") on node \"crc\" DevicePath \"\"" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.112830 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" event={"ID":"b70c39f9-b146-4980-bb34-0034ed5b8b86","Type":"ContainerDied","Data":"fe1eeebdeee51628dcb2c67e7fc53eccac86d4e1b65827fb9858f10526775406"} Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.112871 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe1eeebdeee51628dcb2c67e7fc53eccac86d4e1b65827fb9858f10526775406" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.112960 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-5frpq" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.196538 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v"] Feb 23 13:40:28 crc kubenswrapper[4851]: E0223 13:40:28.196872 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70c39f9-b146-4980-bb34-0034ed5b8b86" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.196891 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70c39f9-b146-4980-bb34-0034ed5b8b86" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.197169 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70c39f9-b146-4980-bb34-0034ed5b8b86" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.197981 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.201967 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.202314 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.202402 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.202430 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.202725 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.204131 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.207125 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v"] Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.389436 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.389521 4851 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkhfh\" (UniqueName: \"kubernetes.io/projected/01be1f4b-d5f3-4dbe-b528-118617cdad1e-kube-api-access-jkhfh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.389553 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.389598 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.389648 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.389693 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.491665 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.491725 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.491818 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.491854 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkhfh\" (UniqueName: 
\"kubernetes.io/projected/01be1f4b-d5f3-4dbe-b528-118617cdad1e-kube-api-access-jkhfh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.491875 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.491919 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.495889 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.496342 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.499925 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.500026 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.500094 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.509735 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkhfh\" (UniqueName: \"kubernetes.io/projected/01be1f4b-d5f3-4dbe-b528-118617cdad1e-kube-api-access-jkhfh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v\" (UID: 
\"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:28 crc kubenswrapper[4851]: I0223 13:40:28.526720 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:40:29 crc kubenswrapper[4851]: I0223 13:40:29.015578 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v"] Feb 23 13:40:29 crc kubenswrapper[4851]: I0223 13:40:29.123489 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" event={"ID":"01be1f4b-d5f3-4dbe-b528-118617cdad1e","Type":"ContainerStarted","Data":"603fff877405c39ee3f080396146e31c7e7d5b22be050e04feefabc30fc95b34"} Feb 23 13:40:30 crc kubenswrapper[4851]: I0223 13:40:30.132062 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" event={"ID":"01be1f4b-d5f3-4dbe-b528-118617cdad1e","Type":"ContainerStarted","Data":"9f2055a6e528e7796f30d1177f3a2553d1f81f25762e0b726827c2967f70eb74"} Feb 23 13:40:30 crc kubenswrapper[4851]: I0223 13:40:30.151101 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" podStartSLOduration=1.443122746 podStartE2EDuration="2.151084208s" podCreationTimestamp="2026-02-23 13:40:28 +0000 UTC" firstStartedPulling="2026-02-23 13:40:29.019994971 +0000 UTC m=+1983.701698649" lastFinishedPulling="2026-02-23 13:40:29.727956433 +0000 UTC m=+1984.409660111" observedRunningTime="2026-02-23 13:40:30.146157898 +0000 UTC m=+1984.827861576" watchObservedRunningTime="2026-02-23 13:40:30.151084208 +0000 UTC m=+1984.832787886" Feb 23 13:40:34 crc kubenswrapper[4851]: I0223 13:40:34.968732 4851 scope.go:117] "RemoveContainer" 
containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:40:34 crc kubenswrapper[4851]: E0223 13:40:34.969503 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:40:38 crc kubenswrapper[4851]: I0223 13:40:38.982005 4851 scope.go:117] "RemoveContainer" containerID="5cfba63c563b7881e6751086ad316e7e9b34c43f7a2c138f864cbd384e383971" Feb 23 13:40:47 crc kubenswrapper[4851]: I0223 13:40:47.969046 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:40:47 crc kubenswrapper[4851]: E0223 13:40:47.969913 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:41:02 crc kubenswrapper[4851]: I0223 13:41:02.969618 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:41:02 crc kubenswrapper[4851]: E0223 13:41:02.970418 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:41:08 crc kubenswrapper[4851]: I0223 13:41:08.421214 4851 generic.go:334] "Generic (PLEG): container finished" podID="01be1f4b-d5f3-4dbe-b528-118617cdad1e" containerID="9f2055a6e528e7796f30d1177f3a2553d1f81f25762e0b726827c2967f70eb74" exitCode=0 Feb 23 13:41:08 crc kubenswrapper[4851]: I0223 13:41:08.421301 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" event={"ID":"01be1f4b-d5f3-4dbe-b528-118617cdad1e","Type":"ContainerDied","Data":"9f2055a6e528e7796f30d1177f3a2553d1f81f25762e0b726827c2967f70eb74"} Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.828851 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.941285 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.941825 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-inventory\") pod \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.942151 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkhfh\" (UniqueName: \"kubernetes.io/projected/01be1f4b-d5f3-4dbe-b528-118617cdad1e-kube-api-access-jkhfh\") pod \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\" 
(UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.942523 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-nova-metadata-neutron-config-0\") pod \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.942796 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-ssh-key-openstack-edpm-ipam\") pod \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.943068 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-metadata-combined-ca-bundle\") pod \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\" (UID: \"01be1f4b-d5f3-4dbe-b528-118617cdad1e\") " Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.947061 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "01be1f4b-d5f3-4dbe-b528-118617cdad1e" (UID: "01be1f4b-d5f3-4dbe-b528-118617cdad1e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.947926 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01be1f4b-d5f3-4dbe-b528-118617cdad1e-kube-api-access-jkhfh" (OuterVolumeSpecName: "kube-api-access-jkhfh") pod "01be1f4b-d5f3-4dbe-b528-118617cdad1e" (UID: "01be1f4b-d5f3-4dbe-b528-118617cdad1e"). InnerVolumeSpecName "kube-api-access-jkhfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.967712 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "01be1f4b-d5f3-4dbe-b528-118617cdad1e" (UID: "01be1f4b-d5f3-4dbe-b528-118617cdad1e"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.967914 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01be1f4b-d5f3-4dbe-b528-118617cdad1e" (UID: "01be1f4b-d5f3-4dbe-b528-118617cdad1e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.968544 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "01be1f4b-d5f3-4dbe-b528-118617cdad1e" (UID: "01be1f4b-d5f3-4dbe-b528-118617cdad1e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:41:09 crc kubenswrapper[4851]: I0223 13:41:09.976778 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-inventory" (OuterVolumeSpecName: "inventory") pod "01be1f4b-d5f3-4dbe-b528-118617cdad1e" (UID: "01be1f4b-d5f3-4dbe-b528-118617cdad1e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.046187 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkhfh\" (UniqueName: \"kubernetes.io/projected/01be1f4b-d5f3-4dbe-b528-118617cdad1e-kube-api-access-jkhfh\") on node \"crc\" DevicePath \"\"" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.046228 4851 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.046241 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.046253 4851 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.046269 4851 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:41:10 crc 
kubenswrapper[4851]: I0223 13:41:10.046282 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01be1f4b-d5f3-4dbe-b528-118617cdad1e-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.440744 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" event={"ID":"01be1f4b-d5f3-4dbe-b528-118617cdad1e","Type":"ContainerDied","Data":"603fff877405c39ee3f080396146e31c7e7d5b22be050e04feefabc30fc95b34"} Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.440801 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="603fff877405c39ee3f080396146e31c7e7d5b22be050e04feefabc30fc95b34" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.440878 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.611679 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5"] Feb 23 13:41:10 crc kubenswrapper[4851]: E0223 13:41:10.612573 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01be1f4b-d5f3-4dbe-b528-118617cdad1e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.612599 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="01be1f4b-d5f3-4dbe-b528-118617cdad1e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.612811 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="01be1f4b-d5f3-4dbe-b528-118617cdad1e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.613642 4851 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.615705 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.616350 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.617643 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.617679 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.617893 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.621588 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5"] Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.657678 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.657817 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: 
\"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.657895 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.657953 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcch\" (UniqueName: \"kubernetes.io/projected/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-kube-api-access-mmcch\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.657987 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.759441 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcch\" (UniqueName: \"kubernetes.io/projected/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-kube-api-access-mmcch\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.759714 4851 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.759796 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.759918 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.760129 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.765168 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: 
\"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.765284 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.765980 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.766985 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.784780 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcch\" (UniqueName: \"kubernetes.io/projected/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-kube-api-access-mmcch\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:10 crc kubenswrapper[4851]: I0223 13:41:10.929761 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:41:11 crc kubenswrapper[4851]: I0223 13:41:11.423718 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5"] Feb 23 13:41:11 crc kubenswrapper[4851]: I0223 13:41:11.451256 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" event={"ID":"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8","Type":"ContainerStarted","Data":"79685fdfa73399e0be8f9c394b3b4da83168567d64bc24dc659da07c6d666cbc"} Feb 23 13:41:12 crc kubenswrapper[4851]: I0223 13:41:12.466700 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" event={"ID":"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8","Type":"ContainerStarted","Data":"c6ecdd44ea9d2892b99655e46643d1f7ca4debe8728110ea6e42ff461d01cb3e"} Feb 23 13:41:12 crc kubenswrapper[4851]: I0223 13:41:12.491465 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" podStartSLOduration=1.9431981619999998 podStartE2EDuration="2.491447054s" podCreationTimestamp="2026-02-23 13:41:10 +0000 UTC" firstStartedPulling="2026-02-23 13:41:11.43139472 +0000 UTC m=+2026.113098398" lastFinishedPulling="2026-02-23 13:41:11.979643612 +0000 UTC m=+2026.661347290" observedRunningTime="2026-02-23 13:41:12.489492389 +0000 UTC m=+2027.171196067" watchObservedRunningTime="2026-02-23 13:41:12.491447054 +0000 UTC m=+2027.173150722" Feb 23 13:41:17 crc kubenswrapper[4851]: I0223 13:41:17.969118 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:41:17 crc kubenswrapper[4851]: E0223 13:41:17.969938 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:41:29 crc kubenswrapper[4851]: I0223 13:41:29.969507 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:41:29 crc kubenswrapper[4851]: E0223 13:41:29.970297 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:41:43 crc kubenswrapper[4851]: I0223 13:41:43.968875 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:41:43 crc kubenswrapper[4851]: E0223 13:41:43.969898 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:41:57 crc kubenswrapper[4851]: I0223 13:41:57.968922 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:41:57 crc kubenswrapper[4851]: E0223 13:41:57.969692 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.373102 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sm7j8"] Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.375689 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.386419 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sm7j8"] Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.531058 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhz97\" (UniqueName: \"kubernetes.io/projected/121a12d3-4047-4a68-8487-f38a840d6bc5-kube-api-access-xhz97\") pod \"redhat-marketplace-sm7j8\" (UID: \"121a12d3-4047-4a68-8487-f38a840d6bc5\") " pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.531138 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-utilities\") pod \"redhat-marketplace-sm7j8\" (UID: \"121a12d3-4047-4a68-8487-f38a840d6bc5\") " pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.531176 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-catalog-content\") pod \"redhat-marketplace-sm7j8\" (UID: 
\"121a12d3-4047-4a68-8487-f38a840d6bc5\") " pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.633287 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-utilities\") pod \"redhat-marketplace-sm7j8\" (UID: \"121a12d3-4047-4a68-8487-f38a840d6bc5\") " pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.633362 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-catalog-content\") pod \"redhat-marketplace-sm7j8\" (UID: \"121a12d3-4047-4a68-8487-f38a840d6bc5\") " pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.633484 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhz97\" (UniqueName: \"kubernetes.io/projected/121a12d3-4047-4a68-8487-f38a840d6bc5-kube-api-access-xhz97\") pod \"redhat-marketplace-sm7j8\" (UID: \"121a12d3-4047-4a68-8487-f38a840d6bc5\") " pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.634222 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-utilities\") pod \"redhat-marketplace-sm7j8\" (UID: \"121a12d3-4047-4a68-8487-f38a840d6bc5\") " pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.634529 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-catalog-content\") pod \"redhat-marketplace-sm7j8\" (UID: \"121a12d3-4047-4a68-8487-f38a840d6bc5\") " 
pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.651822 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhz97\" (UniqueName: \"kubernetes.io/projected/121a12d3-4047-4a68-8487-f38a840d6bc5-kube-api-access-xhz97\") pod \"redhat-marketplace-sm7j8\" (UID: \"121a12d3-4047-4a68-8487-f38a840d6bc5\") " pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:09 crc kubenswrapper[4851]: I0223 13:42:09.710899 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:10 crc kubenswrapper[4851]: I0223 13:42:10.187257 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sm7j8"] Feb 23 13:42:10 crc kubenswrapper[4851]: I0223 13:42:10.941147 4851 generic.go:334] "Generic (PLEG): container finished" podID="121a12d3-4047-4a68-8487-f38a840d6bc5" containerID="57a2bae291b0f92396be5b843ec4db8f8a3bf619479c3fa452164bede987ce64" exitCode=0 Feb 23 13:42:10 crc kubenswrapper[4851]: I0223 13:42:10.941202 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sm7j8" event={"ID":"121a12d3-4047-4a68-8487-f38a840d6bc5","Type":"ContainerDied","Data":"57a2bae291b0f92396be5b843ec4db8f8a3bf619479c3fa452164bede987ce64"} Feb 23 13:42:10 crc kubenswrapper[4851]: I0223 13:42:10.941456 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sm7j8" event={"ID":"121a12d3-4047-4a68-8487-f38a840d6bc5","Type":"ContainerStarted","Data":"f582696ce4eed82994f321f4515670b1d12049f8c89dc6690205d7eb2b2828f8"} Feb 23 13:42:10 crc kubenswrapper[4851]: I0223 13:42:10.942803 4851 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:42:11 crc kubenswrapper[4851]: I0223 13:42:11.968956 4851 scope.go:117] "RemoveContainer" 
containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:42:12 crc kubenswrapper[4851]: I0223 13:42:12.958299 4851 generic.go:334] "Generic (PLEG): container finished" podID="121a12d3-4047-4a68-8487-f38a840d6bc5" containerID="4e3399205a3cab2693f126dd32f72e9ee7b76add6213908a818844ae4ddffa12" exitCode=0 Feb 23 13:42:12 crc kubenswrapper[4851]: I0223 13:42:12.958357 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sm7j8" event={"ID":"121a12d3-4047-4a68-8487-f38a840d6bc5","Type":"ContainerDied","Data":"4e3399205a3cab2693f126dd32f72e9ee7b76add6213908a818844ae4ddffa12"} Feb 23 13:42:12 crc kubenswrapper[4851]: I0223 13:42:12.961344 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"5968e7d1e8349088295a783895d1c083589df7bdc9df8e8ca121f9e40be40081"} Feb 23 13:42:13 crc kubenswrapper[4851]: I0223 13:42:13.978665 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sm7j8" event={"ID":"121a12d3-4047-4a68-8487-f38a840d6bc5","Type":"ContainerStarted","Data":"eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0"} Feb 23 13:42:13 crc kubenswrapper[4851]: I0223 13:42:13.996320 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sm7j8" podStartSLOduration=2.487504419 podStartE2EDuration="4.996298029s" podCreationTimestamp="2026-02-23 13:42:09 +0000 UTC" firstStartedPulling="2026-02-23 13:42:10.942587091 +0000 UTC m=+2085.624290769" lastFinishedPulling="2026-02-23 13:42:13.451380691 +0000 UTC m=+2088.133084379" observedRunningTime="2026-02-23 13:42:13.989284091 +0000 UTC m=+2088.670987769" watchObservedRunningTime="2026-02-23 13:42:13.996298029 +0000 UTC m=+2088.678001707" Feb 23 13:42:19 crc kubenswrapper[4851]: I0223 
13:42:19.712752 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:19 crc kubenswrapper[4851]: I0223 13:42:19.713389 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:19 crc kubenswrapper[4851]: I0223 13:42:19.756088 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:20 crc kubenswrapper[4851]: I0223 13:42:20.059274 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:20 crc kubenswrapper[4851]: I0223 13:42:20.112779 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sm7j8"] Feb 23 13:42:22 crc kubenswrapper[4851]: I0223 13:42:22.030906 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sm7j8" podUID="121a12d3-4047-4a68-8487-f38a840d6bc5" containerName="registry-server" containerID="cri-o://eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0" gracePeriod=2 Feb 23 13:42:22 crc kubenswrapper[4851]: I0223 13:42:22.574777 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:22 crc kubenswrapper[4851]: I0223 13:42:22.666280 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhz97\" (UniqueName: \"kubernetes.io/projected/121a12d3-4047-4a68-8487-f38a840d6bc5-kube-api-access-xhz97\") pod \"121a12d3-4047-4a68-8487-f38a840d6bc5\" (UID: \"121a12d3-4047-4a68-8487-f38a840d6bc5\") " Feb 23 13:42:22 crc kubenswrapper[4851]: I0223 13:42:22.666510 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-utilities\") pod \"121a12d3-4047-4a68-8487-f38a840d6bc5\" (UID: \"121a12d3-4047-4a68-8487-f38a840d6bc5\") " Feb 23 13:42:22 crc kubenswrapper[4851]: I0223 13:42:22.666589 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-catalog-content\") pod \"121a12d3-4047-4a68-8487-f38a840d6bc5\" (UID: \"121a12d3-4047-4a68-8487-f38a840d6bc5\") " Feb 23 13:42:22 crc kubenswrapper[4851]: I0223 13:42:22.671606 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-utilities" (OuterVolumeSpecName: "utilities") pod "121a12d3-4047-4a68-8487-f38a840d6bc5" (UID: "121a12d3-4047-4a68-8487-f38a840d6bc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:42:22 crc kubenswrapper[4851]: I0223 13:42:22.676666 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/121a12d3-4047-4a68-8487-f38a840d6bc5-kube-api-access-xhz97" (OuterVolumeSpecName: "kube-api-access-xhz97") pod "121a12d3-4047-4a68-8487-f38a840d6bc5" (UID: "121a12d3-4047-4a68-8487-f38a840d6bc5"). InnerVolumeSpecName "kube-api-access-xhz97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:42:22 crc kubenswrapper[4851]: I0223 13:42:22.768449 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:42:22 crc kubenswrapper[4851]: I0223 13:42:22.768489 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhz97\" (UniqueName: \"kubernetes.io/projected/121a12d3-4047-4a68-8487-f38a840d6bc5-kube-api-access-xhz97\") on node \"crc\" DevicePath \"\"" Feb 23 13:42:22 crc kubenswrapper[4851]: I0223 13:42:22.789620 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "121a12d3-4047-4a68-8487-f38a840d6bc5" (UID: "121a12d3-4047-4a68-8487-f38a840d6bc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:42:22 crc kubenswrapper[4851]: I0223 13:42:22.869731 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/121a12d3-4047-4a68-8487-f38a840d6bc5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.040661 4851 generic.go:334] "Generic (PLEG): container finished" podID="121a12d3-4047-4a68-8487-f38a840d6bc5" containerID="eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0" exitCode=0 Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.040703 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sm7j8" event={"ID":"121a12d3-4047-4a68-8487-f38a840d6bc5","Type":"ContainerDied","Data":"eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0"} Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.040730 4851 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-sm7j8" event={"ID":"121a12d3-4047-4a68-8487-f38a840d6bc5","Type":"ContainerDied","Data":"f582696ce4eed82994f321f4515670b1d12049f8c89dc6690205d7eb2b2828f8"} Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.040747 4851 scope.go:117] "RemoveContainer" containerID="eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0" Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.040749 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sm7j8" Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.067011 4851 scope.go:117] "RemoveContainer" containerID="4e3399205a3cab2693f126dd32f72e9ee7b76add6213908a818844ae4ddffa12" Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.075356 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sm7j8"] Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.086868 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sm7j8"] Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.119056 4851 scope.go:117] "RemoveContainer" containerID="57a2bae291b0f92396be5b843ec4db8f8a3bf619479c3fa452164bede987ce64" Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.137887 4851 scope.go:117] "RemoveContainer" containerID="eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0" Feb 23 13:42:23 crc kubenswrapper[4851]: E0223 13:42:23.138311 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0\": container with ID starting with eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0 not found: ID does not exist" containerID="eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0" Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.138404 4851 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0"} err="failed to get container status \"eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0\": rpc error: code = NotFound desc = could not find container \"eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0\": container with ID starting with eba245fb08e39871fb36bb47e173cc980197d59d85695476209e9193529e90b0 not found: ID does not exist" Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.138430 4851 scope.go:117] "RemoveContainer" containerID="4e3399205a3cab2693f126dd32f72e9ee7b76add6213908a818844ae4ddffa12" Feb 23 13:42:23 crc kubenswrapper[4851]: E0223 13:42:23.138764 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e3399205a3cab2693f126dd32f72e9ee7b76add6213908a818844ae4ddffa12\": container with ID starting with 4e3399205a3cab2693f126dd32f72e9ee7b76add6213908a818844ae4ddffa12 not found: ID does not exist" containerID="4e3399205a3cab2693f126dd32f72e9ee7b76add6213908a818844ae4ddffa12" Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.138800 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e3399205a3cab2693f126dd32f72e9ee7b76add6213908a818844ae4ddffa12"} err="failed to get container status \"4e3399205a3cab2693f126dd32f72e9ee7b76add6213908a818844ae4ddffa12\": rpc error: code = NotFound desc = could not find container \"4e3399205a3cab2693f126dd32f72e9ee7b76add6213908a818844ae4ddffa12\": container with ID starting with 4e3399205a3cab2693f126dd32f72e9ee7b76add6213908a818844ae4ddffa12 not found: ID does not exist" Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.138828 4851 scope.go:117] "RemoveContainer" containerID="57a2bae291b0f92396be5b843ec4db8f8a3bf619479c3fa452164bede987ce64" Feb 23 13:42:23 crc kubenswrapper[4851]: E0223 
13:42:23.139138 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57a2bae291b0f92396be5b843ec4db8f8a3bf619479c3fa452164bede987ce64\": container with ID starting with 57a2bae291b0f92396be5b843ec4db8f8a3bf619479c3fa452164bede987ce64 not found: ID does not exist" containerID="57a2bae291b0f92396be5b843ec4db8f8a3bf619479c3fa452164bede987ce64" Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.139160 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57a2bae291b0f92396be5b843ec4db8f8a3bf619479c3fa452164bede987ce64"} err="failed to get container status \"57a2bae291b0f92396be5b843ec4db8f8a3bf619479c3fa452164bede987ce64\": rpc error: code = NotFound desc = could not find container \"57a2bae291b0f92396be5b843ec4db8f8a3bf619479c3fa452164bede987ce64\": container with ID starting with 57a2bae291b0f92396be5b843ec4db8f8a3bf619479c3fa452164bede987ce64 not found: ID does not exist" Feb 23 13:42:23 crc kubenswrapper[4851]: I0223 13:42:23.979675 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="121a12d3-4047-4a68-8487-f38a840d6bc5" path="/var/lib/kubelet/pods/121a12d3-4047-4a68-8487-f38a840d6bc5/volumes" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.398522 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qvpr7"] Feb 23 13:42:26 crc kubenswrapper[4851]: E0223 13:42:26.399114 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="121a12d3-4047-4a68-8487-f38a840d6bc5" containerName="extract-content" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.399127 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="121a12d3-4047-4a68-8487-f38a840d6bc5" containerName="extract-content" Feb 23 13:42:26 crc kubenswrapper[4851]: E0223 13:42:26.399142 4851 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="121a12d3-4047-4a68-8487-f38a840d6bc5" containerName="registry-server" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.399149 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="121a12d3-4047-4a68-8487-f38a840d6bc5" containerName="registry-server" Feb 23 13:42:26 crc kubenswrapper[4851]: E0223 13:42:26.399159 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="121a12d3-4047-4a68-8487-f38a840d6bc5" containerName="extract-utilities" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.399166 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="121a12d3-4047-4a68-8487-f38a840d6bc5" containerName="extract-utilities" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.399377 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="121a12d3-4047-4a68-8487-f38a840d6bc5" containerName="registry-server" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.400783 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvpr7" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.413546 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvpr7"] Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.537491 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-catalog-content\") pod \"community-operators-qvpr7\" (UID: \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") " pod="openshift-marketplace/community-operators-qvpr7" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.538085 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-utilities\") pod \"community-operators-qvpr7\" (UID: 
\"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") " pod="openshift-marketplace/community-operators-qvpr7" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.538164 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwsf6\" (UniqueName: \"kubernetes.io/projected/a7ee4f7c-7890-4cb0-b903-69dac67ee806-kube-api-access-gwsf6\") pod \"community-operators-qvpr7\" (UID: \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") " pod="openshift-marketplace/community-operators-qvpr7" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.639852 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-utilities\") pod \"community-operators-qvpr7\" (UID: \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") " pod="openshift-marketplace/community-operators-qvpr7" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.640147 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwsf6\" (UniqueName: \"kubernetes.io/projected/a7ee4f7c-7890-4cb0-b903-69dac67ee806-kube-api-access-gwsf6\") pod \"community-operators-qvpr7\" (UID: \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") " pod="openshift-marketplace/community-operators-qvpr7" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.640309 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-catalog-content\") pod \"community-operators-qvpr7\" (UID: \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") " pod="openshift-marketplace/community-operators-qvpr7" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.640503 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-utilities\") pod \"community-operators-qvpr7\" (UID: 
\"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") " pod="openshift-marketplace/community-operators-qvpr7" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.640792 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-catalog-content\") pod \"community-operators-qvpr7\" (UID: \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") " pod="openshift-marketplace/community-operators-qvpr7" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.663550 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwsf6\" (UniqueName: \"kubernetes.io/projected/a7ee4f7c-7890-4cb0-b903-69dac67ee806-kube-api-access-gwsf6\") pod \"community-operators-qvpr7\" (UID: \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") " pod="openshift-marketplace/community-operators-qvpr7" Feb 23 13:42:26 crc kubenswrapper[4851]: I0223 13:42:26.724826 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qvpr7" Feb 23 13:42:27 crc kubenswrapper[4851]: I0223 13:42:27.271972 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvpr7"] Feb 23 13:42:27 crc kubenswrapper[4851]: W0223 13:42:27.274572 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7ee4f7c_7890_4cb0_b903_69dac67ee806.slice/crio-59cc751ce41d773f1b79d05e0a63179b255f4fa0017c2f755b65c9c28c872077 WatchSource:0}: Error finding container 59cc751ce41d773f1b79d05e0a63179b255f4fa0017c2f755b65c9c28c872077: Status 404 returned error can't find the container with id 59cc751ce41d773f1b79d05e0a63179b255f4fa0017c2f755b65c9c28c872077 Feb 23 13:42:28 crc kubenswrapper[4851]: I0223 13:42:28.094890 4851 generic.go:334] "Generic (PLEG): container finished" podID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" containerID="50328108847b52d92b80949815e6b11bbd7185a360b8bf40ec412441d5add3e5" exitCode=0 Feb 23 13:42:28 crc kubenswrapper[4851]: I0223 13:42:28.094951 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvpr7" event={"ID":"a7ee4f7c-7890-4cb0-b903-69dac67ee806","Type":"ContainerDied","Data":"50328108847b52d92b80949815e6b11bbd7185a360b8bf40ec412441d5add3e5"} Feb 23 13:42:28 crc kubenswrapper[4851]: I0223 13:42:28.095370 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvpr7" event={"ID":"a7ee4f7c-7890-4cb0-b903-69dac67ee806","Type":"ContainerStarted","Data":"59cc751ce41d773f1b79d05e0a63179b255f4fa0017c2f755b65c9c28c872077"} Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.104512 4851 generic.go:334] "Generic (PLEG): container finished" podID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" containerID="3d03493f6f3afb4cdefccf08a641a4672876ad8b42f37c3d2f4fd9832648158d" exitCode=0 Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 
13:42:29.104662 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvpr7" event={"ID":"a7ee4f7c-7890-4cb0-b903-69dac67ee806","Type":"ContainerDied","Data":"3d03493f6f3afb4cdefccf08a641a4672876ad8b42f37c3d2f4fd9832648158d"} Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.396477 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5cpt7"] Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.398412 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cpt7" Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.409306 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cpt7"] Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.493992 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-catalog-content\") pod \"redhat-operators-5cpt7\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") " pod="openshift-marketplace/redhat-operators-5cpt7" Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.494476 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqhn6\" (UniqueName: \"kubernetes.io/projected/0d8d064c-656b-4f8d-9b2d-a8d5be140416-kube-api-access-xqhn6\") pod \"redhat-operators-5cpt7\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") " pod="openshift-marketplace/redhat-operators-5cpt7" Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.494598 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-utilities\") pod \"redhat-operators-5cpt7\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") " 
pod="openshift-marketplace/redhat-operators-5cpt7" Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.596856 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqhn6\" (UniqueName: \"kubernetes.io/projected/0d8d064c-656b-4f8d-9b2d-a8d5be140416-kube-api-access-xqhn6\") pod \"redhat-operators-5cpt7\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") " pod="openshift-marketplace/redhat-operators-5cpt7" Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.596935 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-utilities\") pod \"redhat-operators-5cpt7\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") " pod="openshift-marketplace/redhat-operators-5cpt7" Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.596995 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-catalog-content\") pod \"redhat-operators-5cpt7\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") " pod="openshift-marketplace/redhat-operators-5cpt7" Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.597611 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-catalog-content\") pod \"redhat-operators-5cpt7\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") " pod="openshift-marketplace/redhat-operators-5cpt7" Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.597687 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-utilities\") pod \"redhat-operators-5cpt7\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") " pod="openshift-marketplace/redhat-operators-5cpt7" Feb 23 13:42:29 crc 
kubenswrapper[4851]: I0223 13:42:29.623066 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqhn6\" (UniqueName: \"kubernetes.io/projected/0d8d064c-656b-4f8d-9b2d-a8d5be140416-kube-api-access-xqhn6\") pod \"redhat-operators-5cpt7\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") " pod="openshift-marketplace/redhat-operators-5cpt7"
Feb 23 13:42:29 crc kubenswrapper[4851]: I0223 13:42:29.782498 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cpt7"
Feb 23 13:42:30 crc kubenswrapper[4851]: I0223 13:42:30.119480 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvpr7" event={"ID":"a7ee4f7c-7890-4cb0-b903-69dac67ee806","Type":"ContainerStarted","Data":"850ebbf8b74681778fe3bcd9bedaa6705701b815c3f49516025ff8e7688b4480"}
Feb 23 13:42:30 crc kubenswrapper[4851]: I0223 13:42:30.153864 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qvpr7" podStartSLOduration=2.736714598 podStartE2EDuration="4.153844562s" podCreationTimestamp="2026-02-23 13:42:26 +0000 UTC" firstStartedPulling="2026-02-23 13:42:28.096298113 +0000 UTC m=+2102.778001791" lastFinishedPulling="2026-02-23 13:42:29.513428067 +0000 UTC m=+2104.195131755" observedRunningTime="2026-02-23 13:42:30.148735688 +0000 UTC m=+2104.830439386" watchObservedRunningTime="2026-02-23 13:42:30.153844562 +0000 UTC m=+2104.835548250"
Feb 23 13:42:30 crc kubenswrapper[4851]: I0223 13:42:30.250949 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5cpt7"]
Feb 23 13:42:31 crc kubenswrapper[4851]: I0223 13:42:31.130241 4851 generic.go:334] "Generic (PLEG): container finished" podID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" containerID="1e51ecd21e3d8b541ae9674152483ae1afcdd6f2e8392388785d541b6f923319" exitCode=0
Feb 23 13:42:31 crc kubenswrapper[4851]: I0223 13:42:31.130303 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cpt7" event={"ID":"0d8d064c-656b-4f8d-9b2d-a8d5be140416","Type":"ContainerDied","Data":"1e51ecd21e3d8b541ae9674152483ae1afcdd6f2e8392388785d541b6f923319"}
Feb 23 13:42:31 crc kubenswrapper[4851]: I0223 13:42:31.130358 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cpt7" event={"ID":"0d8d064c-656b-4f8d-9b2d-a8d5be140416","Type":"ContainerStarted","Data":"cc3787a4ecca996eb08b86f254e298f5251d27bda966465e9495d82362a9c283"}
Feb 23 13:42:32 crc kubenswrapper[4851]: I0223 13:42:32.139177 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cpt7" event={"ID":"0d8d064c-656b-4f8d-9b2d-a8d5be140416","Type":"ContainerStarted","Data":"269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3"}
Feb 23 13:42:33 crc kubenswrapper[4851]: I0223 13:42:33.150379 4851 generic.go:334] "Generic (PLEG): container finished" podID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" containerID="269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3" exitCode=0
Feb 23 13:42:33 crc kubenswrapper[4851]: I0223 13:42:33.150422 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cpt7" event={"ID":"0d8d064c-656b-4f8d-9b2d-a8d5be140416","Type":"ContainerDied","Data":"269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3"}
Feb 23 13:42:34 crc kubenswrapper[4851]: I0223 13:42:34.179406 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cpt7" event={"ID":"0d8d064c-656b-4f8d-9b2d-a8d5be140416","Type":"ContainerStarted","Data":"b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70"}
Feb 23 13:42:36 crc kubenswrapper[4851]: I0223 13:42:36.725711 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qvpr7"
Feb 23 13:42:36 crc kubenswrapper[4851]: I0223 13:42:36.726287 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qvpr7"
Feb 23 13:42:36 crc kubenswrapper[4851]: I0223 13:42:36.772773 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qvpr7"
Feb 23 13:42:36 crc kubenswrapper[4851]: I0223 13:42:36.800047 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5cpt7" podStartSLOduration=5.398278989 podStartE2EDuration="7.800030515s" podCreationTimestamp="2026-02-23 13:42:29 +0000 UTC" firstStartedPulling="2026-02-23 13:42:31.132343392 +0000 UTC m=+2105.814047070" lastFinishedPulling="2026-02-23 13:42:33.534094928 +0000 UTC m=+2108.215798596" observedRunningTime="2026-02-23 13:42:34.203338069 +0000 UTC m=+2108.885041757" watchObservedRunningTime="2026-02-23 13:42:36.800030515 +0000 UTC m=+2111.481734193"
Feb 23 13:42:37 crc kubenswrapper[4851]: I0223 13:42:37.244581 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qvpr7"
Feb 23 13:42:38 crc kubenswrapper[4851]: I0223 13:42:38.197540 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qvpr7"]
Feb 23 13:42:39 crc kubenswrapper[4851]: I0223 13:42:39.216659 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qvpr7" podUID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" containerName="registry-server" containerID="cri-o://850ebbf8b74681778fe3bcd9bedaa6705701b815c3f49516025ff8e7688b4480" gracePeriod=2
Feb 23 13:42:39 crc kubenswrapper[4851]: I0223 13:42:39.783168 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5cpt7"
Feb 23 13:42:39 crc kubenswrapper[4851]: I0223 13:42:39.783229 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5cpt7"
Feb 23 13:42:39 crc kubenswrapper[4851]: I0223 13:42:39.831448 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5cpt7"
Feb 23 13:42:40 crc kubenswrapper[4851]: I0223 13:42:40.266645 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5cpt7"
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.235595 4851 generic.go:334] "Generic (PLEG): container finished" podID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" containerID="850ebbf8b74681778fe3bcd9bedaa6705701b815c3f49516025ff8e7688b4480" exitCode=0
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.235701 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvpr7" event={"ID":"a7ee4f7c-7890-4cb0-b903-69dac67ee806","Type":"ContainerDied","Data":"850ebbf8b74681778fe3bcd9bedaa6705701b815c3f49516025ff8e7688b4480"}
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.390197 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cpt7"]
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.508854 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvpr7"
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.618789 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwsf6\" (UniqueName: \"kubernetes.io/projected/a7ee4f7c-7890-4cb0-b903-69dac67ee806-kube-api-access-gwsf6\") pod \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\" (UID: \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") "
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.618975 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-catalog-content\") pod \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\" (UID: \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") "
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.619091 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-utilities\") pod \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\" (UID: \"a7ee4f7c-7890-4cb0-b903-69dac67ee806\") "
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.620148 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-utilities" (OuterVolumeSpecName: "utilities") pod "a7ee4f7c-7890-4cb0-b903-69dac67ee806" (UID: "a7ee4f7c-7890-4cb0-b903-69dac67ee806"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.624980 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ee4f7c-7890-4cb0-b903-69dac67ee806-kube-api-access-gwsf6" (OuterVolumeSpecName: "kube-api-access-gwsf6") pod "a7ee4f7c-7890-4cb0-b903-69dac67ee806" (UID: "a7ee4f7c-7890-4cb0-b903-69dac67ee806"). InnerVolumeSpecName "kube-api-access-gwsf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.663839 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7ee4f7c-7890-4cb0-b903-69dac67ee806" (UID: "a7ee4f7c-7890-4cb0-b903-69dac67ee806"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.721291 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.721322 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7ee4f7c-7890-4cb0-b903-69dac67ee806-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 13:42:41 crc kubenswrapper[4851]: I0223 13:42:41.721345 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwsf6\" (UniqueName: \"kubernetes.io/projected/a7ee4f7c-7890-4cb0-b903-69dac67ee806-kube-api-access-gwsf6\") on node \"crc\" DevicePath \"\""
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.246881 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvpr7" event={"ID":"a7ee4f7c-7890-4cb0-b903-69dac67ee806","Type":"ContainerDied","Data":"59cc751ce41d773f1b79d05e0a63179b255f4fa0017c2f755b65c9c28c872077"}
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.246918 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvpr7"
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.247213 4851 scope.go:117] "RemoveContainer" containerID="850ebbf8b74681778fe3bcd9bedaa6705701b815c3f49516025ff8e7688b4480"
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.247033 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5cpt7" podUID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" containerName="registry-server" containerID="cri-o://b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70" gracePeriod=2
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.274483 4851 scope.go:117] "RemoveContainer" containerID="3d03493f6f3afb4cdefccf08a641a4672876ad8b42f37c3d2f4fd9832648158d"
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.277356 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qvpr7"]
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.296283 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qvpr7"]
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.315024 4851 scope.go:117] "RemoveContainer" containerID="50328108847b52d92b80949815e6b11bbd7185a360b8bf40ec412441d5add3e5"
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.719368 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cpt7"
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.841624 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-utilities\") pod \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") "
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.841751 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-catalog-content\") pod \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") "
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.841886 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqhn6\" (UniqueName: \"kubernetes.io/projected/0d8d064c-656b-4f8d-9b2d-a8d5be140416-kube-api-access-xqhn6\") pod \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\" (UID: \"0d8d064c-656b-4f8d-9b2d-a8d5be140416\") "
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.842885 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-utilities" (OuterVolumeSpecName: "utilities") pod "0d8d064c-656b-4f8d-9b2d-a8d5be140416" (UID: "0d8d064c-656b-4f8d-9b2d-a8d5be140416"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.846894 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d8d064c-656b-4f8d-9b2d-a8d5be140416-kube-api-access-xqhn6" (OuterVolumeSpecName: "kube-api-access-xqhn6") pod "0d8d064c-656b-4f8d-9b2d-a8d5be140416" (UID: "0d8d064c-656b-4f8d-9b2d-a8d5be140416"). InnerVolumeSpecName "kube-api-access-xqhn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.944071 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.944101 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqhn6\" (UniqueName: \"kubernetes.io/projected/0d8d064c-656b-4f8d-9b2d-a8d5be140416-kube-api-access-xqhn6\") on node \"crc\" DevicePath \"\""
Feb 23 13:42:42 crc kubenswrapper[4851]: I0223 13:42:42.954456 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d8d064c-656b-4f8d-9b2d-a8d5be140416" (UID: "0d8d064c-656b-4f8d-9b2d-a8d5be140416"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.045890 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d8d064c-656b-4f8d-9b2d-a8d5be140416-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.258659 4851 generic.go:334] "Generic (PLEG): container finished" podID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" containerID="b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70" exitCode=0
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.258720 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5cpt7"
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.258738 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cpt7" event={"ID":"0d8d064c-656b-4f8d-9b2d-a8d5be140416","Type":"ContainerDied","Data":"b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70"}
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.258803 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5cpt7" event={"ID":"0d8d064c-656b-4f8d-9b2d-a8d5be140416","Type":"ContainerDied","Data":"cc3787a4ecca996eb08b86f254e298f5251d27bda966465e9495d82362a9c283"}
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.258824 4851 scope.go:117] "RemoveContainer" containerID="b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70"
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.276354 4851 scope.go:117] "RemoveContainer" containerID="269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3"
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.302075 4851 scope.go:117] "RemoveContainer" containerID="1e51ecd21e3d8b541ae9674152483ae1afcdd6f2e8392388785d541b6f923319"
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.305344 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5cpt7"]
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.317739 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5cpt7"]
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.328980 4851 scope.go:117] "RemoveContainer" containerID="b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70"
Feb 23 13:42:43 crc kubenswrapper[4851]: E0223 13:42:43.329821 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70\": container with ID starting with b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70 not found: ID does not exist" containerID="b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70"
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.329876 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70"} err="failed to get container status \"b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70\": rpc error: code = NotFound desc = could not find container \"b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70\": container with ID starting with b5a761683dc0196f5d927ab353bbf7ee6fe3312d6d32c38416be41db1c31ad70 not found: ID does not exist"
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.329915 4851 scope.go:117] "RemoveContainer" containerID="269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3"
Feb 23 13:42:43 crc kubenswrapper[4851]: E0223 13:42:43.330355 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3\": container with ID starting with 269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3 not found: ID does not exist" containerID="269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3"
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.330388 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3"} err="failed to get container status \"269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3\": rpc error: code = NotFound desc = could not find container \"269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3\": container with ID starting with 269ef0b25b6dbfccbec113e14e50b46abea33ddbdc0fd3770f5c888b3d9353f3 not found: ID does not exist"
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.330406 4851 scope.go:117] "RemoveContainer" containerID="1e51ecd21e3d8b541ae9674152483ae1afcdd6f2e8392388785d541b6f923319"
Feb 23 13:42:43 crc kubenswrapper[4851]: E0223 13:42:43.330786 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e51ecd21e3d8b541ae9674152483ae1afcdd6f2e8392388785d541b6f923319\": container with ID starting with 1e51ecd21e3d8b541ae9674152483ae1afcdd6f2e8392388785d541b6f923319 not found: ID does not exist" containerID="1e51ecd21e3d8b541ae9674152483ae1afcdd6f2e8392388785d541b6f923319"
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.330840 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e51ecd21e3d8b541ae9674152483ae1afcdd6f2e8392388785d541b6f923319"} err="failed to get container status \"1e51ecd21e3d8b541ae9674152483ae1afcdd6f2e8392388785d541b6f923319\": rpc error: code = NotFound desc = could not find container \"1e51ecd21e3d8b541ae9674152483ae1afcdd6f2e8392388785d541b6f923319\": container with ID starting with 1e51ecd21e3d8b541ae9674152483ae1afcdd6f2e8392388785d541b6f923319 not found: ID does not exist"
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.979005 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" path="/var/lib/kubelet/pods/0d8d064c-656b-4f8d-9b2d-a8d5be140416/volumes"
Feb 23 13:42:43 crc kubenswrapper[4851]: I0223 13:42:43.979681 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" path="/var/lib/kubelet/pods/a7ee4f7c-7890-4cb0-b903-69dac67ee806/volumes"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.203228 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-57tx4"]
Feb 23 13:43:16 crc kubenswrapper[4851]: E0223 13:43:16.204717 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" containerName="registry-server"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.204738 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" containerName="registry-server"
Feb 23 13:43:16 crc kubenswrapper[4851]: E0223 13:43:16.204773 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" containerName="extract-utilities"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.204780 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" containerName="extract-utilities"
Feb 23 13:43:16 crc kubenswrapper[4851]: E0223 13:43:16.204788 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" containerName="extract-content"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.204797 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" containerName="extract-content"
Feb 23 13:43:16 crc kubenswrapper[4851]: E0223 13:43:16.204817 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" containerName="extract-content"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.204828 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" containerName="extract-content"
Feb 23 13:43:16 crc kubenswrapper[4851]: E0223 13:43:16.204840 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" containerName="registry-server"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.204847 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" containerName="registry-server"
Feb 23 13:43:16 crc kubenswrapper[4851]: E0223 13:43:16.204866 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" containerName="extract-utilities"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.204872 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" containerName="extract-utilities"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.205151 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d8d064c-656b-4f8d-9b2d-a8d5be140416" containerName="registry-server"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.205172 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ee4f7c-7890-4cb0-b903-69dac67ee806" containerName="registry-server"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.209226 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.223955 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57tx4"]
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.339628 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxjg5\" (UniqueName: \"kubernetes.io/projected/6e57a98a-9004-4e67-b97d-98447109f48b-kube-api-access-kxjg5\") pod \"certified-operators-57tx4\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") " pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.340347 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-utilities\") pod \"certified-operators-57tx4\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") " pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.340409 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-catalog-content\") pod \"certified-operators-57tx4\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") " pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.442982 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-utilities\") pod \"certified-operators-57tx4\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") " pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.443053 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-catalog-content\") pod \"certified-operators-57tx4\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") " pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.443169 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxjg5\" (UniqueName: \"kubernetes.io/projected/6e57a98a-9004-4e67-b97d-98447109f48b-kube-api-access-kxjg5\") pod \"certified-operators-57tx4\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") " pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.443532 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-utilities\") pod \"certified-operators-57tx4\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") " pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.443702 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-catalog-content\") pod \"certified-operators-57tx4\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") " pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.465772 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxjg5\" (UniqueName: \"kubernetes.io/projected/6e57a98a-9004-4e67-b97d-98447109f48b-kube-api-access-kxjg5\") pod \"certified-operators-57tx4\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") " pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:16 crc kubenswrapper[4851]: I0223 13:43:16.541128 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:17 crc kubenswrapper[4851]: I0223 13:43:17.064114 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57tx4"]
Feb 23 13:43:17 crc kubenswrapper[4851]: I0223 13:43:17.522188 4851 generic.go:334] "Generic (PLEG): container finished" podID="6e57a98a-9004-4e67-b97d-98447109f48b" containerID="b3b5836f526f1744692c26d7992a0c03f9aa0fe03a94245c2ef7ab47e1fa73c7" exitCode=0
Feb 23 13:43:17 crc kubenswrapper[4851]: I0223 13:43:17.522227 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57tx4" event={"ID":"6e57a98a-9004-4e67-b97d-98447109f48b","Type":"ContainerDied","Data":"b3b5836f526f1744692c26d7992a0c03f9aa0fe03a94245c2ef7ab47e1fa73c7"}
Feb 23 13:43:17 crc kubenswrapper[4851]: I0223 13:43:17.522495 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57tx4" event={"ID":"6e57a98a-9004-4e67-b97d-98447109f48b","Type":"ContainerStarted","Data":"520eb42f5657e88381e593664d25e8d73e6d949e6a6bf438f935f39028937614"}
Feb 23 13:43:18 crc kubenswrapper[4851]: I0223 13:43:18.531314 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57tx4" event={"ID":"6e57a98a-9004-4e67-b97d-98447109f48b","Type":"ContainerStarted","Data":"51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c"}
Feb 23 13:43:19 crc kubenswrapper[4851]: I0223 13:43:19.540686 4851 generic.go:334] "Generic (PLEG): container finished" podID="6e57a98a-9004-4e67-b97d-98447109f48b" containerID="51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c" exitCode=0
Feb 23 13:43:19 crc kubenswrapper[4851]: I0223 13:43:19.540740 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57tx4" event={"ID":"6e57a98a-9004-4e67-b97d-98447109f48b","Type":"ContainerDied","Data":"51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c"}
Feb 23 13:43:20 crc kubenswrapper[4851]: I0223 13:43:20.551187 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57tx4" event={"ID":"6e57a98a-9004-4e67-b97d-98447109f48b","Type":"ContainerStarted","Data":"5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5"}
Feb 23 13:43:20 crc kubenswrapper[4851]: I0223 13:43:20.578263 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-57tx4" podStartSLOduration=2.138625843 podStartE2EDuration="4.578229376s" podCreationTimestamp="2026-02-23 13:43:16 +0000 UTC" firstStartedPulling="2026-02-23 13:43:17.523809365 +0000 UTC m=+2152.205513063" lastFinishedPulling="2026-02-23 13:43:19.963412918 +0000 UTC m=+2154.645116596" observedRunningTime="2026-02-23 13:43:20.575260092 +0000 UTC m=+2155.256963780" watchObservedRunningTime="2026-02-23 13:43:20.578229376 +0000 UTC m=+2155.259933054"
Feb 23 13:43:26 crc kubenswrapper[4851]: I0223 13:43:26.541986 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:26 crc kubenswrapper[4851]: I0223 13:43:26.542358 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:26 crc kubenswrapper[4851]: I0223 13:43:26.592291 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:26 crc kubenswrapper[4851]: I0223 13:43:26.647545 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:26 crc kubenswrapper[4851]: I0223 13:43:26.833099 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57tx4"]
Feb 23 13:43:28 crc kubenswrapper[4851]: I0223 13:43:28.608488 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-57tx4" podUID="6e57a98a-9004-4e67-b97d-98447109f48b" containerName="registry-server" containerID="cri-o://5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5" gracePeriod=2
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.115591 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.311108 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxjg5\" (UniqueName: \"kubernetes.io/projected/6e57a98a-9004-4e67-b97d-98447109f48b-kube-api-access-kxjg5\") pod \"6e57a98a-9004-4e67-b97d-98447109f48b\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") "
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.311194 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-utilities\") pod \"6e57a98a-9004-4e67-b97d-98447109f48b\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") "
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.311305 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-catalog-content\") pod \"6e57a98a-9004-4e67-b97d-98447109f48b\" (UID: \"6e57a98a-9004-4e67-b97d-98447109f48b\") "
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.312154 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-utilities" (OuterVolumeSpecName: "utilities") pod "6e57a98a-9004-4e67-b97d-98447109f48b" (UID: "6e57a98a-9004-4e67-b97d-98447109f48b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.316970 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e57a98a-9004-4e67-b97d-98447109f48b-kube-api-access-kxjg5" (OuterVolumeSpecName: "kube-api-access-kxjg5") pod "6e57a98a-9004-4e67-b97d-98447109f48b" (UID: "6e57a98a-9004-4e67-b97d-98447109f48b"). InnerVolumeSpecName "kube-api-access-kxjg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.365986 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e57a98a-9004-4e67-b97d-98447109f48b" (UID: "6e57a98a-9004-4e67-b97d-98447109f48b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.414378 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxjg5\" (UniqueName: \"kubernetes.io/projected/6e57a98a-9004-4e67-b97d-98447109f48b-kube-api-access-kxjg5\") on node \"crc\" DevicePath \"\""
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.414435 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.414449 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e57a98a-9004-4e67-b97d-98447109f48b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.620013 4851 generic.go:334] "Generic (PLEG): container finished" podID="6e57a98a-9004-4e67-b97d-98447109f48b" containerID="5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5" exitCode=0
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.620065 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57tx4" event={"ID":"6e57a98a-9004-4e67-b97d-98447109f48b","Type":"ContainerDied","Data":"5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5"}
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.621113 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57tx4" event={"ID":"6e57a98a-9004-4e67-b97d-98447109f48b","Type":"ContainerDied","Data":"520eb42f5657e88381e593664d25e8d73e6d949e6a6bf438f935f39028937614"}
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.620178 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57tx4"
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.621143 4851 scope.go:117] "RemoveContainer" containerID="5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5"
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.645266 4851 scope.go:117] "RemoveContainer" containerID="51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c"
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.669299 4851 scope.go:117] "RemoveContainer" containerID="b3b5836f526f1744692c26d7992a0c03f9aa0fe03a94245c2ef7ab47e1fa73c7"
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.669456 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-57tx4"]
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.680751 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-57tx4"]
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.744447 4851 scope.go:117] "RemoveContainer" containerID="5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5"
Feb 23 13:43:29 crc kubenswrapper[4851]: E0223 13:43:29.745017 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5\": container with ID starting with 5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5 not found: ID does not exist" containerID="5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5"
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.745065 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5"} err="failed to get container status \"5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5\": rpc error: code = NotFound desc = could not find container \"5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5\": container with ID starting with 5a7f145669f38dc7f3ffeff2938ba97f33c1b5bd4d02580b9a01db3b953abaf5 not found: ID does not exist"
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.745095 4851 scope.go:117] "RemoveContainer" containerID="51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c"
Feb 23 13:43:29 crc kubenswrapper[4851]: E0223 13:43:29.745795 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c\": container with ID starting with 51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c not found: ID does not exist" containerID="51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c"
Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.745830 4851 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c"} err="failed to get container status \"51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c\": rpc error: code = NotFound desc = could not find container \"51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c\": container with ID starting with 51b2e0387e60de75736d0403066bd96ee9ae473547b44cf859249915010c141c not found: ID does not exist" Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.745854 4851 scope.go:117] "RemoveContainer" containerID="b3b5836f526f1744692c26d7992a0c03f9aa0fe03a94245c2ef7ab47e1fa73c7" Feb 23 13:43:29 crc kubenswrapper[4851]: E0223 13:43:29.746260 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3b5836f526f1744692c26d7992a0c03f9aa0fe03a94245c2ef7ab47e1fa73c7\": container with ID starting with b3b5836f526f1744692c26d7992a0c03f9aa0fe03a94245c2ef7ab47e1fa73c7 not found: ID does not exist" containerID="b3b5836f526f1744692c26d7992a0c03f9aa0fe03a94245c2ef7ab47e1fa73c7" Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.746280 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b5836f526f1744692c26d7992a0c03f9aa0fe03a94245c2ef7ab47e1fa73c7"} err="failed to get container status \"b3b5836f526f1744692c26d7992a0c03f9aa0fe03a94245c2ef7ab47e1fa73c7\": rpc error: code = NotFound desc = could not find container \"b3b5836f526f1744692c26d7992a0c03f9aa0fe03a94245c2ef7ab47e1fa73c7\": container with ID starting with b3b5836f526f1744692c26d7992a0c03f9aa0fe03a94245c2ef7ab47e1fa73c7 not found: ID does not exist" Feb 23 13:43:29 crc kubenswrapper[4851]: I0223 13:43:29.978886 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e57a98a-9004-4e67-b97d-98447109f48b" path="/var/lib/kubelet/pods/6e57a98a-9004-4e67-b97d-98447109f48b/volumes" Feb 23 13:44:33 crc kubenswrapper[4851]: I0223 
13:44:33.207263 4851 generic.go:334] "Generic (PLEG): container finished" podID="7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8" containerID="c6ecdd44ea9d2892b99655e46643d1f7ca4debe8728110ea6e42ff461d01cb3e" exitCode=0 Feb 23 13:44:33 crc kubenswrapper[4851]: I0223 13:44:33.207368 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" event={"ID":"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8","Type":"ContainerDied","Data":"c6ecdd44ea9d2892b99655e46643d1f7ca4debe8728110ea6e42ff461d01cb3e"} Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.643670 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.686258 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-combined-ca-bundle\") pod \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.686297 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-secret-0\") pod \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.686326 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-ssh-key-openstack-edpm-ipam\") pod \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.686356 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-mmcch\" (UniqueName: \"kubernetes.io/projected/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-kube-api-access-mmcch\") pod \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.686415 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-inventory\") pod \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\" (UID: \"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8\") " Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.691999 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8" (UID: "7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.694196 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-kube-api-access-mmcch" (OuterVolumeSpecName: "kube-api-access-mmcch") pod "7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8" (UID: "7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8"). InnerVolumeSpecName "kube-api-access-mmcch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.715289 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8" (UID: "7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.716758 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8" (UID: "7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.717109 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-inventory" (OuterVolumeSpecName: "inventory") pod "7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8" (UID: "7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.788933 4851 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.788962 4851 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.788970 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.788979 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmcch\" (UniqueName: 
\"kubernetes.io/projected/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-kube-api-access-mmcch\") on node \"crc\" DevicePath \"\"" Feb 23 13:44:34 crc kubenswrapper[4851]: I0223 13:44:34.788987 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.225937 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" event={"ID":"7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8","Type":"ContainerDied","Data":"79685fdfa73399e0be8f9c394b3b4da83168567d64bc24dc659da07c6d666cbc"} Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.226013 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79685fdfa73399e0be8f9c394b3b4da83168567d64bc24dc659da07c6d666cbc" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.225969 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.314199 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg"] Feb 23 13:44:35 crc kubenswrapper[4851]: E0223 13:44:35.314665 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e57a98a-9004-4e67-b97d-98447109f48b" containerName="extract-content" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.314682 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e57a98a-9004-4e67-b97d-98447109f48b" containerName="extract-content" Feb 23 13:44:35 crc kubenswrapper[4851]: E0223 13:44:35.314696 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e57a98a-9004-4e67-b97d-98447109f48b" containerName="registry-server" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.314703 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e57a98a-9004-4e67-b97d-98447109f48b" containerName="registry-server" Feb 23 13:44:35 crc kubenswrapper[4851]: E0223 13:44:35.314729 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e57a98a-9004-4e67-b97d-98447109f48b" containerName="extract-utilities" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.314737 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e57a98a-9004-4e67-b97d-98447109f48b" containerName="extract-utilities" Feb 23 13:44:35 crc kubenswrapper[4851]: E0223 13:44:35.314754 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.314763 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.314932 4851 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.314956 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e57a98a-9004-4e67-b97d-98447109f48b" containerName="registry-server" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.315536 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.320288 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.320544 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.320711 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.320836 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.320961 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.321070 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.321734 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.347151 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg"] Feb 23 13:44:35 crc 
kubenswrapper[4851]: I0223 13:44:35.399259 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.399304 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mpwb\" (UniqueName: \"kubernetes.io/projected/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-kube-api-access-7mpwb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.399425 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.399457 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.399489 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.399543 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.399590 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.399643 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.399685 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" 
(UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.399713 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.399746 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.501752 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.501804 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.501848 
4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.501905 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.501955 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.502010 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.502050 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-combined-ca-bundle\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.502081 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.502114 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.502150 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.502179 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mpwb\" (UniqueName: \"kubernetes.io/projected/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-kube-api-access-7mpwb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 
13:44:35.504986 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.505165 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.505560 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.506965 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.507390 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-3\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.508003 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.508915 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.508934 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.509157 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.515973 4851 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.519960 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mpwb\" (UniqueName: \"kubernetes.io/projected/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-kube-api-access-7mpwb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-q95kg\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:35 crc kubenswrapper[4851]: I0223 13:44:35.638642 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:44:36 crc kubenswrapper[4851]: I0223 13:44:36.163675 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg"] Feb 23 13:44:36 crc kubenswrapper[4851]: I0223 13:44:36.234497 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" event={"ID":"85e1b392-9aa6-4cd1-93b0-fa3587de47ac","Type":"ContainerStarted","Data":"46763f61581b1a54d29fc5547d5272165c8e2e5fac8706cb63ca45b9b5eb9a11"} Feb 23 13:44:37 crc kubenswrapper[4851]: I0223 13:44:37.244091 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" event={"ID":"85e1b392-9aa6-4cd1-93b0-fa3587de47ac","Type":"ContainerStarted","Data":"ddfe4119eae040b3eab62f5f6c11d2db9ccf736dcc5a10d68c9edd13aa470668"} Feb 23 13:44:37 crc kubenswrapper[4851]: I0223 13:44:37.267601 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" podStartSLOduration=1.777851361 podStartE2EDuration="2.267579393s" podCreationTimestamp="2026-02-23 13:44:35 +0000 UTC" firstStartedPulling="2026-02-23 13:44:36.169218798 +0000 UTC m=+2230.850922476" lastFinishedPulling="2026-02-23 13:44:36.65894682 +0000 UTC m=+2231.340650508" observedRunningTime="2026-02-23 13:44:37.258379882 +0000 UTC m=+2231.940083580" watchObservedRunningTime="2026-02-23 13:44:37.267579393 +0000 UTC m=+2231.949283071" Feb 23 13:44:41 crc kubenswrapper[4851]: I0223 13:44:41.924830 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:44:41 crc kubenswrapper[4851]: I0223 13:44:41.925430 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.148945 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx"] Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.150903 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.153124 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.153304 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.159323 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx"] Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.253780 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7kls\" (UniqueName: \"kubernetes.io/projected/589db2c4-6120-4625-8977-77d13e5949f5-kube-api-access-b7kls\") pod \"collect-profiles-29530905-n5gpx\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.253873 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589db2c4-6120-4625-8977-77d13e5949f5-config-volume\") pod \"collect-profiles-29530905-n5gpx\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.253966 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/589db2c4-6120-4625-8977-77d13e5949f5-secret-volume\") pod \"collect-profiles-29530905-n5gpx\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.355877 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7kls\" (UniqueName: \"kubernetes.io/projected/589db2c4-6120-4625-8977-77d13e5949f5-kube-api-access-b7kls\") pod \"collect-profiles-29530905-n5gpx\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.356013 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589db2c4-6120-4625-8977-77d13e5949f5-config-volume\") pod \"collect-profiles-29530905-n5gpx\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.356037 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/589db2c4-6120-4625-8977-77d13e5949f5-secret-volume\") pod \"collect-profiles-29530905-n5gpx\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.357482 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589db2c4-6120-4625-8977-77d13e5949f5-config-volume\") pod \"collect-profiles-29530905-n5gpx\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.362143 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/589db2c4-6120-4625-8977-77d13e5949f5-secret-volume\") pod \"collect-profiles-29530905-n5gpx\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.377454 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7kls\" (UniqueName: \"kubernetes.io/projected/589db2c4-6120-4625-8977-77d13e5949f5-kube-api-access-b7kls\") pod \"collect-profiles-29530905-n5gpx\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:00 crc kubenswrapper[4851]: I0223 13:45:00.512905 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:01 crc kubenswrapper[4851]: I0223 13:45:01.000917 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx"] Feb 23 13:45:01 crc kubenswrapper[4851]: I0223 13:45:01.444793 4851 generic.go:334] "Generic (PLEG): container finished" podID="589db2c4-6120-4625-8977-77d13e5949f5" containerID="72880f242db202a5e3f3944aeb286228e574f54222a2907704a7f207c3c7c2bc" exitCode=0 Feb 23 13:45:01 crc kubenswrapper[4851]: I0223 13:45:01.445275 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" event={"ID":"589db2c4-6120-4625-8977-77d13e5949f5","Type":"ContainerDied","Data":"72880f242db202a5e3f3944aeb286228e574f54222a2907704a7f207c3c7c2bc"} Feb 23 13:45:01 crc kubenswrapper[4851]: I0223 13:45:01.445319 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" 
event={"ID":"589db2c4-6120-4625-8977-77d13e5949f5","Type":"ContainerStarted","Data":"c5350ec2ed6b3ee2076c190beafa551e082838cf0e6c76e18d0ce12c0b7a5ceb"} Feb 23 13:45:02 crc kubenswrapper[4851]: I0223 13:45:02.800050 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.001810 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589db2c4-6120-4625-8977-77d13e5949f5-config-volume\") pod \"589db2c4-6120-4625-8977-77d13e5949f5\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.002085 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7kls\" (UniqueName: \"kubernetes.io/projected/589db2c4-6120-4625-8977-77d13e5949f5-kube-api-access-b7kls\") pod \"589db2c4-6120-4625-8977-77d13e5949f5\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.002341 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/589db2c4-6120-4625-8977-77d13e5949f5-secret-volume\") pod \"589db2c4-6120-4625-8977-77d13e5949f5\" (UID: \"589db2c4-6120-4625-8977-77d13e5949f5\") " Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.002794 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/589db2c4-6120-4625-8977-77d13e5949f5-config-volume" (OuterVolumeSpecName: "config-volume") pod "589db2c4-6120-4625-8977-77d13e5949f5" (UID: "589db2c4-6120-4625-8977-77d13e5949f5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.003266 4851 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589db2c4-6120-4625-8977-77d13e5949f5-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.009193 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/589db2c4-6120-4625-8977-77d13e5949f5-kube-api-access-b7kls" (OuterVolumeSpecName: "kube-api-access-b7kls") pod "589db2c4-6120-4625-8977-77d13e5949f5" (UID: "589db2c4-6120-4625-8977-77d13e5949f5"). InnerVolumeSpecName "kube-api-access-b7kls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.009380 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589db2c4-6120-4625-8977-77d13e5949f5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "589db2c4-6120-4625-8977-77d13e5949f5" (UID: "589db2c4-6120-4625-8977-77d13e5949f5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.104608 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7kls\" (UniqueName: \"kubernetes.io/projected/589db2c4-6120-4625-8977-77d13e5949f5-kube-api-access-b7kls\") on node \"crc\" DevicePath \"\"" Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.104938 4851 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/589db2c4-6120-4625-8977-77d13e5949f5-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.485606 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" event={"ID":"589db2c4-6120-4625-8977-77d13e5949f5","Type":"ContainerDied","Data":"c5350ec2ed6b3ee2076c190beafa551e082838cf0e6c76e18d0ce12c0b7a5ceb"} Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.485658 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5350ec2ed6b3ee2076c190beafa551e082838cf0e6c76e18d0ce12c0b7a5ceb" Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.485689 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530905-n5gpx" Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.873903 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b"] Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.882014 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530860-zrt4b"] Feb 23 13:45:03 crc kubenswrapper[4851]: I0223 13:45:03.979249 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c81e69e-6d53-4016-b87e-bdc816dc0365" path="/var/lib/kubelet/pods/3c81e69e-6d53-4016-b87e-bdc816dc0365/volumes" Feb 23 13:45:11 crc kubenswrapper[4851]: I0223 13:45:11.924997 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:45:11 crc kubenswrapper[4851]: I0223 13:45:11.925247 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:45:39 crc kubenswrapper[4851]: I0223 13:45:39.166299 4851 scope.go:117] "RemoveContainer" containerID="342045dc568ecac4114c95ec0358ef99796f41d82855f90db7b3e80db51cc128" Feb 23 13:45:41 crc kubenswrapper[4851]: I0223 13:45:41.925135 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 23 13:45:41 crc kubenswrapper[4851]: I0223 13:45:41.925885 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:45:41 crc kubenswrapper[4851]: I0223 13:45:41.925933 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:45:41 crc kubenswrapper[4851]: I0223 13:45:41.926737 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5968e7d1e8349088295a783895d1c083589df7bdc9df8e8ca121f9e40be40081"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 13:45:41 crc kubenswrapper[4851]: I0223 13:45:41.926801 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://5968e7d1e8349088295a783895d1c083589df7bdc9df8e8ca121f9e40be40081" gracePeriod=600 Feb 23 13:45:42 crc kubenswrapper[4851]: I0223 13:45:42.259766 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="5968e7d1e8349088295a783895d1c083589df7bdc9df8e8ca121f9e40be40081" exitCode=0 Feb 23 13:45:42 crc kubenswrapper[4851]: I0223 13:45:42.259872 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" 
event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"5968e7d1e8349088295a783895d1c083589df7bdc9df8e8ca121f9e40be40081"} Feb 23 13:45:42 crc kubenswrapper[4851]: I0223 13:45:42.260216 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3"} Feb 23 13:45:42 crc kubenswrapper[4851]: I0223 13:45:42.260241 4851 scope.go:117] "RemoveContainer" containerID="50f4dbf9f457efca8da953a7763426bd2c2cc349b80ae54bd937fbfd1de42ba2" Feb 23 13:46:40 crc kubenswrapper[4851]: I0223 13:46:40.762872 4851 generic.go:334] "Generic (PLEG): container finished" podID="85e1b392-9aa6-4cd1-93b0-fa3587de47ac" containerID="ddfe4119eae040b3eab62f5f6c11d2db9ccf736dcc5a10d68c9edd13aa470668" exitCode=0 Feb 23 13:46:40 crc kubenswrapper[4851]: I0223 13:46:40.762972 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" event={"ID":"85e1b392-9aa6-4cd1-93b0-fa3587de47ac","Type":"ContainerDied","Data":"ddfe4119eae040b3eab62f5f6c11d2db9ccf736dcc5a10d68c9edd13aa470668"} Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.155445 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.330774 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-combined-ca-bundle\") pod \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.330858 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-0\") pod \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.330890 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mpwb\" (UniqueName: \"kubernetes.io/projected/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-kube-api-access-7mpwb\") pod \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.330919 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-1\") pod \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.330989 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-0\") pod \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 
13:46:42.331027 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-3\") pod \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.331051 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-1\") pod \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.331081 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-extra-config-0\") pod \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.331115 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-ssh-key-openstack-edpm-ipam\") pod \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.331140 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-2\") pod \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.331173 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-inventory\") pod \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\" (UID: \"85e1b392-9aa6-4cd1-93b0-fa3587de47ac\") " Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.337029 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "85e1b392-9aa6-4cd1-93b0-fa3587de47ac" (UID: "85e1b392-9aa6-4cd1-93b0-fa3587de47ac"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.342428 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-kube-api-access-7mpwb" (OuterVolumeSpecName: "kube-api-access-7mpwb") pod "85e1b392-9aa6-4cd1-93b0-fa3587de47ac" (UID: "85e1b392-9aa6-4cd1-93b0-fa3587de47ac"). InnerVolumeSpecName "kube-api-access-7mpwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.358150 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "85e1b392-9aa6-4cd1-93b0-fa3587de47ac" (UID: "85e1b392-9aa6-4cd1-93b0-fa3587de47ac"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.364361 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "85e1b392-9aa6-4cd1-93b0-fa3587de47ac" (UID: "85e1b392-9aa6-4cd1-93b0-fa3587de47ac"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.364984 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "85e1b392-9aa6-4cd1-93b0-fa3587de47ac" (UID: "85e1b392-9aa6-4cd1-93b0-fa3587de47ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.368910 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-inventory" (OuterVolumeSpecName: "inventory") pod "85e1b392-9aa6-4cd1-93b0-fa3587de47ac" (UID: "85e1b392-9aa6-4cd1-93b0-fa3587de47ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.369546 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "85e1b392-9aa6-4cd1-93b0-fa3587de47ac" (UID: "85e1b392-9aa6-4cd1-93b0-fa3587de47ac"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.370147 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "85e1b392-9aa6-4cd1-93b0-fa3587de47ac" (UID: "85e1b392-9aa6-4cd1-93b0-fa3587de47ac"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.371203 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "85e1b392-9aa6-4cd1-93b0-fa3587de47ac" (UID: "85e1b392-9aa6-4cd1-93b0-fa3587de47ac"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.371239 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "85e1b392-9aa6-4cd1-93b0-fa3587de47ac" (UID: "85e1b392-9aa6-4cd1-93b0-fa3587de47ac"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.371955 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "85e1b392-9aa6-4cd1-93b0-fa3587de47ac" (UID: "85e1b392-9aa6-4cd1-93b0-fa3587de47ac"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.433248 4851 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.433280 4851 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.433290 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mpwb\" (UniqueName: \"kubernetes.io/projected/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-kube-api-access-7mpwb\") on node \"crc\" DevicePath \"\"" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.433299 4851 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.433308 4851 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.433318 4851 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.433339 4851 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.433348 4851 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.433356 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.433364 4851 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.433373 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e1b392-9aa6-4cd1-93b0-fa3587de47ac-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.778164 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" event={"ID":"85e1b392-9aa6-4cd1-93b0-fa3587de47ac","Type":"ContainerDied","Data":"46763f61581b1a54d29fc5547d5272165c8e2e5fac8706cb63ca45b9b5eb9a11"} Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.778200 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46763f61581b1a54d29fc5547d5272165c8e2e5fac8706cb63ca45b9b5eb9a11" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.778230 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-q95kg" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.882997 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45"] Feb 23 13:46:42 crc kubenswrapper[4851]: E0223 13:46:42.883497 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="589db2c4-6120-4625-8977-77d13e5949f5" containerName="collect-profiles" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.883523 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="589db2c4-6120-4625-8977-77d13e5949f5" containerName="collect-profiles" Feb 23 13:46:42 crc kubenswrapper[4851]: E0223 13:46:42.883558 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e1b392-9aa6-4cd1-93b0-fa3587de47ac" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.883569 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e1b392-9aa6-4cd1-93b0-fa3587de47ac" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.883796 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e1b392-9aa6-4cd1-93b0-fa3587de47ac" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.883822 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="589db2c4-6120-4625-8977-77d13e5949f5" containerName="collect-profiles" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.884511 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.886677 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.887041 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.887484 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.887821 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.889651 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ql7kb" Feb 23 13:46:42 crc kubenswrapper[4851]: I0223 13:46:42.897182 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45"] Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.042703 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.042783 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.042827 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.042884 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.042933 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.042979 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.043023 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdcql\" (UniqueName: \"kubernetes.io/projected/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-kube-api-access-zdcql\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.145204 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.145265 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.145316 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.145390 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.145449 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.145504 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.145547 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdcql\" (UniqueName: \"kubernetes.io/projected/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-kube-api-access-zdcql\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.149080 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.150048 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.150604 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.150756 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.151235 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.152131 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.163634 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdcql\" (UniqueName: \"kubernetes.io/projected/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-kube-api-access-zdcql\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mhk45\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.202234 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.750485 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45"] Feb 23 13:46:43 crc kubenswrapper[4851]: I0223 13:46:43.788316 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" event={"ID":"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476","Type":"ContainerStarted","Data":"a33b8a8280c1cce65d5cb8848389a84467f4d70931cc98b45be32fb9233f9116"} Feb 23 13:46:44 crc kubenswrapper[4851]: I0223 13:46:44.797882 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" event={"ID":"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476","Type":"ContainerStarted","Data":"365c00b0cde7db0b4359862be19af1a8e1c414a7a5b679d930be20a411c2b18c"} Feb 23 13:46:44 crc kubenswrapper[4851]: I0223 13:46:44.822941 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" podStartSLOduration=2.331277814 podStartE2EDuration="2.822913513s" podCreationTimestamp="2026-02-23 13:46:42 +0000 UTC" firstStartedPulling="2026-02-23 13:46:43.75222632 +0000 UTC m=+2358.433930008" lastFinishedPulling="2026-02-23 13:46:44.243862029 +0000 UTC m=+2358.925565707" observedRunningTime="2026-02-23 13:46:44.814071893 +0000 UTC m=+2359.495775581" watchObservedRunningTime="2026-02-23 13:46:44.822913513 +0000 UTC m=+2359.504617201" Feb 23 13:48:11 crc kubenswrapper[4851]: I0223 13:48:11.925137 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:48:11 crc kubenswrapper[4851]: 
I0223 13:48:11.925777 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:48:41 crc kubenswrapper[4851]: I0223 13:48:41.924465 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:48:41 crc kubenswrapper[4851]: I0223 13:48:41.925037 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:48:44 crc kubenswrapper[4851]: I0223 13:48:44.780348 4851 generic.go:334] "Generic (PLEG): container finished" podID="ec787d1d-3f44-445b-a2ad-0d0b9ce7f476" containerID="365c00b0cde7db0b4359862be19af1a8e1c414a7a5b679d930be20a411c2b18c" exitCode=0 Feb 23 13:48:44 crc kubenswrapper[4851]: I0223 13:48:44.780403 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" event={"ID":"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476","Type":"ContainerDied","Data":"365c00b0cde7db0b4359862be19af1a8e1c414a7a5b679d930be20a411c2b18c"} Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.164345 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.354027 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ssh-key-openstack-edpm-ipam\") pod \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.354070 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-telemetry-combined-ca-bundle\") pod \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.354132 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-2\") pod \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.354263 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-inventory\") pod \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.354435 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-0\") pod \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " Feb 23 13:48:46 crc 
kubenswrapper[4851]: I0223 13:48:46.354497 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdcql\" (UniqueName: \"kubernetes.io/projected/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-kube-api-access-zdcql\") pod \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.354526 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-1\") pod \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\" (UID: \"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476\") " Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.360028 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-kube-api-access-zdcql" (OuterVolumeSpecName: "kube-api-access-zdcql") pod "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476" (UID: "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476"). InnerVolumeSpecName "kube-api-access-zdcql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.360736 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476" (UID: "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.383768 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476" (UID: "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.385021 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476" (UID: "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.393716 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476" (UID: "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.403829 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-inventory" (OuterVolumeSpecName: "inventory") pod "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476" (UID: "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.419936 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476" (UID: "ec787d1d-3f44-445b-a2ad-0d0b9ce7f476"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.456843 4851 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.456877 4851 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.456926 4851 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.456944 4851 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.456955 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdcql\" (UniqueName: \"kubernetes.io/projected/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-kube-api-access-zdcql\") on node \"crc\" DevicePath \"\"" Feb 23 13:48:46 crc 
kubenswrapper[4851]: I0223 13:48:46.456963 4851 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.456972 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec787d1d-3f44-445b-a2ad-0d0b9ce7f476-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.797055 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" event={"ID":"ec787d1d-3f44-445b-a2ad-0d0b9ce7f476","Type":"ContainerDied","Data":"a33b8a8280c1cce65d5cb8848389a84467f4d70931cc98b45be32fb9233f9116"} Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.797397 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a33b8a8280c1cce65d5cb8848389a84467f4d70931cc98b45be32fb9233f9116" Feb 23 13:48:46 crc kubenswrapper[4851]: I0223 13:48:46.797105 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mhk45" Feb 23 13:49:11 crc kubenswrapper[4851]: I0223 13:49:11.925053 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:49:11 crc kubenswrapper[4851]: I0223 13:49:11.926123 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:49:11 crc kubenswrapper[4851]: I0223 13:49:11.926212 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 13:49:11 crc kubenswrapper[4851]: I0223 13:49:11.927206 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 13:49:11 crc kubenswrapper[4851]: I0223 13:49:11.927286 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" gracePeriod=600 Feb 23 13:49:12 crc kubenswrapper[4851]: E0223 13:49:12.067286 4851 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:49:13 crc kubenswrapper[4851]: I0223 13:49:13.036247 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" exitCode=0 Feb 23 13:49:13 crc kubenswrapper[4851]: I0223 13:49:13.036322 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3"} Feb 23 13:49:13 crc kubenswrapper[4851]: I0223 13:49:13.037655 4851 scope.go:117] "RemoveContainer" containerID="5968e7d1e8349088295a783895d1c083589df7bdc9df8e8ca121f9e40be40081" Feb 23 13:49:13 crc kubenswrapper[4851]: I0223 13:49:13.039373 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:49:13 crc kubenswrapper[4851]: E0223 13:49:13.039940 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:49:27 crc kubenswrapper[4851]: I0223 13:49:27.969545 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 
23 13:49:27 crc kubenswrapper[4851]: E0223 13:49:27.970367 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:49:38 crc kubenswrapper[4851]: I0223 13:49:38.969179 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:49:38 crc kubenswrapper[4851]: E0223 13:49:38.969684 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.732235 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 13:49:42 crc kubenswrapper[4851]: E0223 13:49:42.733520 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec787d1d-3f44-445b-a2ad-0d0b9ce7f476" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.733549 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec787d1d-3f44-445b-a2ad-0d0b9ce7f476" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.733930 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec787d1d-3f44-445b-a2ad-0d0b9ce7f476" 
containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.735246 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.737835 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.737938 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.739988 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pj67j" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.740071 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.770699 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.933590 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.933667 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.933713 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-config-data\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.933739 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.933770 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.933815 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srsqs\" (UniqueName: \"kubernetes.io/projected/85d7dda0-1545-4b56-9694-c704cfec078c-kube-api-access-srsqs\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.934373 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.934671 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:42 crc kubenswrapper[4851]: I0223 13:49:42.934760 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.037552 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.037655 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.037718 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-config-data\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.037758 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.037806 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.037877 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srsqs\" (UniqueName: \"kubernetes.io/projected/85d7dda0-1545-4b56-9694-c704cfec078c-kube-api-access-srsqs\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.038036 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.038115 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.038160 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.038269 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.038985 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.039013 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.039893 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-config-data\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.040311 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.047471 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.048588 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.053263 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.068236 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srsqs\" (UniqueName: \"kubernetes.io/projected/85d7dda0-1545-4b56-9694-c704cfec078c-kube-api-access-srsqs\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.079967 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " 
pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.377824 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.694919 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 13:49:43 crc kubenswrapper[4851]: I0223 13:49:43.701704 4851 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:49:44 crc kubenswrapper[4851]: I0223 13:49:44.340493 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"85d7dda0-1545-4b56-9694-c704cfec078c","Type":"ContainerStarted","Data":"20fe9faee03a8821f38d089b187dbe5603e05c2c6af08453f0bd9d424e4130b1"} Feb 23 13:49:49 crc kubenswrapper[4851]: I0223 13:49:49.969792 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:49:49 crc kubenswrapper[4851]: E0223 13:49:49.970360 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:50:03 crc kubenswrapper[4851]: I0223 13:50:03.969205 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:50:03 crc kubenswrapper[4851]: E0223 13:50:03.969955 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:50:13 crc kubenswrapper[4851]: E0223 13:50:13.445397 4851 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 23 13:50:13 crc kubenswrapper[4851]: E0223 13:50:13.446656 4851 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,Su
bPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-srsqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(85d7dda0-1545-4b56-9694-c704cfec078c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 13:50:13 crc kubenswrapper[4851]: E0223 13:50:13.447948 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="85d7dda0-1545-4b56-9694-c704cfec078c" Feb 23 13:50:13 crc kubenswrapper[4851]: E0223 13:50:13.623356 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="85d7dda0-1545-4b56-9694-c704cfec078c" Feb 23 13:50:17 crc kubenswrapper[4851]: I0223 13:50:17.968317 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:50:17 crc kubenswrapper[4851]: E0223 13:50:17.969093 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:50:28 crc kubenswrapper[4851]: I0223 13:50:28.438473 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 23 13:50:29 crc kubenswrapper[4851]: I0223 13:50:29.769795 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"85d7dda0-1545-4b56-9694-c704cfec078c","Type":"ContainerStarted","Data":"5c865590b2de663985b68dd807e9a832d15255b9c543380aa1c9cd27a462015c"} Feb 23 13:50:29 crc kubenswrapper[4851]: I0223 13:50:29.796130 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.061708912 
podStartE2EDuration="48.79610915s" podCreationTimestamp="2026-02-23 13:49:41 +0000 UTC" firstStartedPulling="2026-02-23 13:49:43.701488753 +0000 UTC m=+2538.383192431" lastFinishedPulling="2026-02-23 13:50:28.435888981 +0000 UTC m=+2583.117592669" observedRunningTime="2026-02-23 13:50:29.786450526 +0000 UTC m=+2584.468154214" watchObservedRunningTime="2026-02-23 13:50:29.79610915 +0000 UTC m=+2584.477812828" Feb 23 13:50:31 crc kubenswrapper[4851]: I0223 13:50:31.969354 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:50:31 crc kubenswrapper[4851]: E0223 13:50:31.970016 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:50:45 crc kubenswrapper[4851]: I0223 13:50:45.975014 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:50:45 crc kubenswrapper[4851]: E0223 13:50:45.975932 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:50:56 crc kubenswrapper[4851]: I0223 13:50:56.968724 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:50:56 crc kubenswrapper[4851]: E0223 13:50:56.970546 
4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:51:09 crc kubenswrapper[4851]: I0223 13:51:09.969413 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:51:09 crc kubenswrapper[4851]: E0223 13:51:09.970156 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:51:20 crc kubenswrapper[4851]: I0223 13:51:20.970061 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:51:20 crc kubenswrapper[4851]: E0223 13:51:20.970676 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:51:34 crc kubenswrapper[4851]: I0223 13:51:34.969313 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:51:34 crc kubenswrapper[4851]: E0223 
13:51:34.970059 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:51:48 crc kubenswrapper[4851]: I0223 13:51:48.968201 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:51:48 crc kubenswrapper[4851]: E0223 13:51:48.969057 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:52:02 crc kubenswrapper[4851]: I0223 13:52:02.969226 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:52:02 crc kubenswrapper[4851]: E0223 13:52:02.969965 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:52:14 crc kubenswrapper[4851]: I0223 13:52:14.968767 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:52:14 crc 
kubenswrapper[4851]: E0223 13:52:14.969541 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:52:28 crc kubenswrapper[4851]: I0223 13:52:28.968995 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:52:28 crc kubenswrapper[4851]: E0223 13:52:28.969874 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:52:43 crc kubenswrapper[4851]: I0223 13:52:43.968785 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:52:43 crc kubenswrapper[4851]: E0223 13:52:43.969546 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:52:56 crc kubenswrapper[4851]: I0223 13:52:56.969155 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 
23 13:52:56 crc kubenswrapper[4851]: E0223 13:52:56.970095 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:53:08 crc kubenswrapper[4851]: I0223 13:53:08.969910 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:53:08 crc kubenswrapper[4851]: E0223 13:53:08.972230 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:53:21 crc kubenswrapper[4851]: I0223 13:53:21.969204 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:53:21 crc kubenswrapper[4851]: E0223 13:53:21.969968 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.785902 4851 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-nnmpr"] Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.790305 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.801574 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nnmpr"] Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.824348 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-utilities\") pod \"community-operators-nnmpr\" (UID: \"f8ccb24a-69b6-4a76-b6f4-142047795928\") " pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.824416 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-catalog-content\") pod \"community-operators-nnmpr\" (UID: \"f8ccb24a-69b6-4a76-b6f4-142047795928\") " pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.824752 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f29b8\" (UniqueName: \"kubernetes.io/projected/f8ccb24a-69b6-4a76-b6f4-142047795928-kube-api-access-f29b8\") pod \"community-operators-nnmpr\" (UID: \"f8ccb24a-69b6-4a76-b6f4-142047795928\") " pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.926017 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f29b8\" (UniqueName: \"kubernetes.io/projected/f8ccb24a-69b6-4a76-b6f4-142047795928-kube-api-access-f29b8\") pod \"community-operators-nnmpr\" (UID: 
\"f8ccb24a-69b6-4a76-b6f4-142047795928\") " pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.926114 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-utilities\") pod \"community-operators-nnmpr\" (UID: \"f8ccb24a-69b6-4a76-b6f4-142047795928\") " pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.926160 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-catalog-content\") pod \"community-operators-nnmpr\" (UID: \"f8ccb24a-69b6-4a76-b6f4-142047795928\") " pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.926790 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-catalog-content\") pod \"community-operators-nnmpr\" (UID: \"f8ccb24a-69b6-4a76-b6f4-142047795928\") " pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.927017 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-utilities\") pod \"community-operators-nnmpr\" (UID: \"f8ccb24a-69b6-4a76-b6f4-142047795928\") " pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:32 crc kubenswrapper[4851]: I0223 13:53:32.952864 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f29b8\" (UniqueName: \"kubernetes.io/projected/f8ccb24a-69b6-4a76-b6f4-142047795928-kube-api-access-f29b8\") pod \"community-operators-nnmpr\" (UID: \"f8ccb24a-69b6-4a76-b6f4-142047795928\") " 
pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:33 crc kubenswrapper[4851]: I0223 13:53:33.127499 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:33 crc kubenswrapper[4851]: I0223 13:53:33.654503 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nnmpr"] Feb 23 13:53:33 crc kubenswrapper[4851]: I0223 13:53:33.722861 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmpr" event={"ID":"f8ccb24a-69b6-4a76-b6f4-142047795928","Type":"ContainerStarted","Data":"d0c84f7e80bb2aa75258248e05c56d981eadb19db757c1602de284d854801b71"} Feb 23 13:53:33 crc kubenswrapper[4851]: I0223 13:53:33.969292 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:53:33 crc kubenswrapper[4851]: E0223 13:53:33.969591 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:53:34 crc kubenswrapper[4851]: I0223 13:53:34.733683 4851 generic.go:334] "Generic (PLEG): container finished" podID="f8ccb24a-69b6-4a76-b6f4-142047795928" containerID="e6845f6b63ee08f8050f85a763ba2c79a58ff7d3d305d343ea5c1816e739f5c2" exitCode=0 Feb 23 13:53:34 crc kubenswrapper[4851]: I0223 13:53:34.734164 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmpr" event={"ID":"f8ccb24a-69b6-4a76-b6f4-142047795928","Type":"ContainerDied","Data":"e6845f6b63ee08f8050f85a763ba2c79a58ff7d3d305d343ea5c1816e739f5c2"} Feb 23 
13:53:35 crc kubenswrapper[4851]: I0223 13:53:35.746506 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmpr" event={"ID":"f8ccb24a-69b6-4a76-b6f4-142047795928","Type":"ContainerStarted","Data":"7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c"} Feb 23 13:53:36 crc kubenswrapper[4851]: I0223 13:53:36.758354 4851 generic.go:334] "Generic (PLEG): container finished" podID="f8ccb24a-69b6-4a76-b6f4-142047795928" containerID="7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c" exitCode=0 Feb 23 13:53:36 crc kubenswrapper[4851]: I0223 13:53:36.758410 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmpr" event={"ID":"f8ccb24a-69b6-4a76-b6f4-142047795928","Type":"ContainerDied","Data":"7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c"} Feb 23 13:53:37 crc kubenswrapper[4851]: I0223 13:53:37.769203 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmpr" event={"ID":"f8ccb24a-69b6-4a76-b6f4-142047795928","Type":"ContainerStarted","Data":"fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f"} Feb 23 13:53:37 crc kubenswrapper[4851]: I0223 13:53:37.792186 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nnmpr" podStartSLOduration=3.004394705 podStartE2EDuration="5.792160502s" podCreationTimestamp="2026-02-23 13:53:32 +0000 UTC" firstStartedPulling="2026-02-23 13:53:34.737296932 +0000 UTC m=+2769.419000610" lastFinishedPulling="2026-02-23 13:53:37.525062729 +0000 UTC m=+2772.206766407" observedRunningTime="2026-02-23 13:53:37.785242516 +0000 UTC m=+2772.466946214" watchObservedRunningTime="2026-02-23 13:53:37.792160502 +0000 UTC m=+2772.473864200" Feb 23 13:53:43 crc kubenswrapper[4851]: I0223 13:53:43.128051 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:43 crc kubenswrapper[4851]: I0223 13:53:43.128624 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:43 crc kubenswrapper[4851]: I0223 13:53:43.175641 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:43 crc kubenswrapper[4851]: I0223 13:53:43.858111 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:43 crc kubenswrapper[4851]: I0223 13:53:43.907061 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nnmpr"] Feb 23 13:53:45 crc kubenswrapper[4851]: I0223 13:53:45.828591 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nnmpr" podUID="f8ccb24a-69b6-4a76-b6f4-142047795928" containerName="registry-server" containerID="cri-o://fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f" gracePeriod=2 Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.351646 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.505445 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f29b8\" (UniqueName: \"kubernetes.io/projected/f8ccb24a-69b6-4a76-b6f4-142047795928-kube-api-access-f29b8\") pod \"f8ccb24a-69b6-4a76-b6f4-142047795928\" (UID: \"f8ccb24a-69b6-4a76-b6f4-142047795928\") " Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.505597 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-catalog-content\") pod \"f8ccb24a-69b6-4a76-b6f4-142047795928\" (UID: \"f8ccb24a-69b6-4a76-b6f4-142047795928\") " Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.505708 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-utilities\") pod \"f8ccb24a-69b6-4a76-b6f4-142047795928\" (UID: \"f8ccb24a-69b6-4a76-b6f4-142047795928\") " Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.507237 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-utilities" (OuterVolumeSpecName: "utilities") pod "f8ccb24a-69b6-4a76-b6f4-142047795928" (UID: "f8ccb24a-69b6-4a76-b6f4-142047795928"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.510914 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ccb24a-69b6-4a76-b6f4-142047795928-kube-api-access-f29b8" (OuterVolumeSpecName: "kube-api-access-f29b8") pod "f8ccb24a-69b6-4a76-b6f4-142047795928" (UID: "f8ccb24a-69b6-4a76-b6f4-142047795928"). InnerVolumeSpecName "kube-api-access-f29b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.558141 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8ccb24a-69b6-4a76-b6f4-142047795928" (UID: "f8ccb24a-69b6-4a76-b6f4-142047795928"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.608921 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f29b8\" (UniqueName: \"kubernetes.io/projected/f8ccb24a-69b6-4a76-b6f4-142047795928-kube-api-access-f29b8\") on node \"crc\" DevicePath \"\"" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.608976 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.608987 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8ccb24a-69b6-4a76-b6f4-142047795928-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.840662 4851 generic.go:334] "Generic (PLEG): container finished" podID="f8ccb24a-69b6-4a76-b6f4-142047795928" containerID="fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f" exitCode=0 Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.840717 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nnmpr" event={"ID":"f8ccb24a-69b6-4a76-b6f4-142047795928","Type":"ContainerDied","Data":"fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f"} Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.841008 4851 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-nnmpr" event={"ID":"f8ccb24a-69b6-4a76-b6f4-142047795928","Type":"ContainerDied","Data":"d0c84f7e80bb2aa75258248e05c56d981eadb19db757c1602de284d854801b71"} Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.841033 4851 scope.go:117] "RemoveContainer" containerID="fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.840739 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nnmpr" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.883366 4851 scope.go:117] "RemoveContainer" containerID="7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.892920 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nnmpr"] Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.900632 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nnmpr"] Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.911294 4851 scope.go:117] "RemoveContainer" containerID="e6845f6b63ee08f8050f85a763ba2c79a58ff7d3d305d343ea5c1816e739f5c2" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.965290 4851 scope.go:117] "RemoveContainer" containerID="fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f" Feb 23 13:53:46 crc kubenswrapper[4851]: E0223 13:53:46.965872 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f\": container with ID starting with fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f not found: ID does not exist" containerID="fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 
13:53:46.965919 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f"} err="failed to get container status \"fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f\": rpc error: code = NotFound desc = could not find container \"fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f\": container with ID starting with fe9dea7038a7875a6942c1b8a9f628b2682ae8fb9d85d102ed62a00dd373e20f not found: ID does not exist" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.965943 4851 scope.go:117] "RemoveContainer" containerID="7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c" Feb 23 13:53:46 crc kubenswrapper[4851]: E0223 13:53:46.966239 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c\": container with ID starting with 7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c not found: ID does not exist" containerID="7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.966274 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c"} err="failed to get container status \"7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c\": rpc error: code = NotFound desc = could not find container \"7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c\": container with ID starting with 7892761b62d91c71207e79e481a7964cc3e6d7eb2283f94b8e795b3dfb76da2c not found: ID does not exist" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.966294 4851 scope.go:117] "RemoveContainer" containerID="e6845f6b63ee08f8050f85a763ba2c79a58ff7d3d305d343ea5c1816e739f5c2" Feb 23 13:53:46 crc 
kubenswrapper[4851]: E0223 13:53:46.966630 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6845f6b63ee08f8050f85a763ba2c79a58ff7d3d305d343ea5c1816e739f5c2\": container with ID starting with e6845f6b63ee08f8050f85a763ba2c79a58ff7d3d305d343ea5c1816e739f5c2 not found: ID does not exist" containerID="e6845f6b63ee08f8050f85a763ba2c79a58ff7d3d305d343ea5c1816e739f5c2" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.966672 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6845f6b63ee08f8050f85a763ba2c79a58ff7d3d305d343ea5c1816e739f5c2"} err="failed to get container status \"e6845f6b63ee08f8050f85a763ba2c79a58ff7d3d305d343ea5c1816e739f5c2\": rpc error: code = NotFound desc = could not find container \"e6845f6b63ee08f8050f85a763ba2c79a58ff7d3d305d343ea5c1816e739f5c2\": container with ID starting with e6845f6b63ee08f8050f85a763ba2c79a58ff7d3d305d343ea5c1816e739f5c2 not found: ID does not exist" Feb 23 13:53:46 crc kubenswrapper[4851]: I0223 13:53:46.968567 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:53:46 crc kubenswrapper[4851]: E0223 13:53:46.968884 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:53:47 crc kubenswrapper[4851]: I0223 13:53:47.979681 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8ccb24a-69b6-4a76-b6f4-142047795928" path="/var/lib/kubelet/pods/f8ccb24a-69b6-4a76-b6f4-142047795928/volumes" Feb 23 13:53:50 crc 
kubenswrapper[4851]: I0223 13:53:50.949186 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zmnkx"] Feb 23 13:53:50 crc kubenswrapper[4851]: E0223 13:53:50.950147 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ccb24a-69b6-4a76-b6f4-142047795928" containerName="extract-utilities" Feb 23 13:53:50 crc kubenswrapper[4851]: I0223 13:53:50.950161 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ccb24a-69b6-4a76-b6f4-142047795928" containerName="extract-utilities" Feb 23 13:53:50 crc kubenswrapper[4851]: E0223 13:53:50.950183 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ccb24a-69b6-4a76-b6f4-142047795928" containerName="extract-content" Feb 23 13:53:50 crc kubenswrapper[4851]: I0223 13:53:50.950190 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ccb24a-69b6-4a76-b6f4-142047795928" containerName="extract-content" Feb 23 13:53:50 crc kubenswrapper[4851]: E0223 13:53:50.950214 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8ccb24a-69b6-4a76-b6f4-142047795928" containerName="registry-server" Feb 23 13:53:50 crc kubenswrapper[4851]: I0223 13:53:50.950220 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8ccb24a-69b6-4a76-b6f4-142047795928" containerName="registry-server" Feb 23 13:53:50 crc kubenswrapper[4851]: I0223 13:53:50.950400 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8ccb24a-69b6-4a76-b6f4-142047795928" containerName="registry-server" Feb 23 13:53:50 crc kubenswrapper[4851]: I0223 13:53:50.951876 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:53:50 crc kubenswrapper[4851]: I0223 13:53:50.963860 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zmnkx"] Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.086532 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-utilities\") pod \"certified-operators-zmnkx\" (UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.086643 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd56t\" (UniqueName: \"kubernetes.io/projected/b663d433-9f75-4377-8b5b-a8da757f37dc-kube-api-access-hd56t\") pod \"certified-operators-zmnkx\" (UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.086931 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-catalog-content\") pod \"certified-operators-zmnkx\" (UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.194364 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-catalog-content\") pod \"certified-operators-zmnkx\" (UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.194487 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-utilities\") pod \"certified-operators-zmnkx\" (UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.194535 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd56t\" (UniqueName: \"kubernetes.io/projected/b663d433-9f75-4377-8b5b-a8da757f37dc-kube-api-access-hd56t\") pod \"certified-operators-zmnkx\" (UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.194825 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-catalog-content\") pod \"certified-operators-zmnkx\" (UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.195086 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-utilities\") pod \"certified-operators-zmnkx\" (UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.215855 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd56t\" (UniqueName: \"kubernetes.io/projected/b663d433-9f75-4377-8b5b-a8da757f37dc-kube-api-access-hd56t\") pod \"certified-operators-zmnkx\" (UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.281436 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.758636 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zmnkx"] Feb 23 13:53:51 crc kubenswrapper[4851]: I0223 13:53:51.883982 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmnkx" event={"ID":"b663d433-9f75-4377-8b5b-a8da757f37dc","Type":"ContainerStarted","Data":"6a2b6275ea7c29d323075ce856077595dcf5f54b2d56704b4f2bf62fc4fbfc7d"} Feb 23 13:53:52 crc kubenswrapper[4851]: I0223 13:53:52.895137 4851 generic.go:334] "Generic (PLEG): container finished" podID="b663d433-9f75-4377-8b5b-a8da757f37dc" containerID="3ff9b64edd5c3c7612c3f3181b4c1085e819cab3d088dacc4ed1b31071256d49" exitCode=0 Feb 23 13:53:52 crc kubenswrapper[4851]: I0223 13:53:52.895741 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmnkx" event={"ID":"b663d433-9f75-4377-8b5b-a8da757f37dc","Type":"ContainerDied","Data":"3ff9b64edd5c3c7612c3f3181b4c1085e819cab3d088dacc4ed1b31071256d49"} Feb 23 13:53:54 crc kubenswrapper[4851]: I0223 13:53:54.919173 4851 generic.go:334] "Generic (PLEG): container finished" podID="b663d433-9f75-4377-8b5b-a8da757f37dc" containerID="4253f0f0a082044fa453502516be73888a05cb41670dd0ab90f1ccef21e4ec4f" exitCode=0 Feb 23 13:53:54 crc kubenswrapper[4851]: I0223 13:53:54.919695 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmnkx" event={"ID":"b663d433-9f75-4377-8b5b-a8da757f37dc","Type":"ContainerDied","Data":"4253f0f0a082044fa453502516be73888a05cb41670dd0ab90f1ccef21e4ec4f"} Feb 23 13:53:55 crc kubenswrapper[4851]: I0223 13:53:55.931216 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmnkx" 
event={"ID":"b663d433-9f75-4377-8b5b-a8da757f37dc","Type":"ContainerStarted","Data":"aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388"} Feb 23 13:53:55 crc kubenswrapper[4851]: I0223 13:53:55.952626 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zmnkx" podStartSLOduration=3.5464082340000003 podStartE2EDuration="5.952605426s" podCreationTimestamp="2026-02-23 13:53:50 +0000 UTC" firstStartedPulling="2026-02-23 13:53:52.897819521 +0000 UTC m=+2787.579523199" lastFinishedPulling="2026-02-23 13:53:55.304016703 +0000 UTC m=+2789.985720391" observedRunningTime="2026-02-23 13:53:55.948360786 +0000 UTC m=+2790.630064484" watchObservedRunningTime="2026-02-23 13:53:55.952605426 +0000 UTC m=+2790.634309104" Feb 23 13:54:01 crc kubenswrapper[4851]: I0223 13:54:01.282449 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:54:01 crc kubenswrapper[4851]: I0223 13:54:01.283050 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:54:01 crc kubenswrapper[4851]: I0223 13:54:01.342831 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:54:01 crc kubenswrapper[4851]: I0223 13:54:01.968893 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:54:01 crc kubenswrapper[4851]: E0223 13:54:01.969584 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" 
podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 13:54:02 crc kubenswrapper[4851]: I0223 13:54:02.040859 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:54:02 crc kubenswrapper[4851]: I0223 13:54:02.096811 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zmnkx"] Feb 23 13:54:04 crc kubenswrapper[4851]: I0223 13:54:04.015602 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zmnkx" podUID="b663d433-9f75-4377-8b5b-a8da757f37dc" containerName="registry-server" containerID="cri-o://aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388" gracePeriod=2 Feb 23 13:54:04 crc kubenswrapper[4851]: I0223 13:54:04.494019 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:54:04 crc kubenswrapper[4851]: I0223 13:54:04.578736 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-utilities\") pod \"b663d433-9f75-4377-8b5b-a8da757f37dc\" (UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " Feb 23 13:54:04 crc kubenswrapper[4851]: I0223 13:54:04.578817 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-catalog-content\") pod \"b663d433-9f75-4377-8b5b-a8da757f37dc\" (UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " Feb 23 13:54:04 crc kubenswrapper[4851]: I0223 13:54:04.578932 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd56t\" (UniqueName: \"kubernetes.io/projected/b663d433-9f75-4377-8b5b-a8da757f37dc-kube-api-access-hd56t\") pod \"b663d433-9f75-4377-8b5b-a8da757f37dc\" 
(UID: \"b663d433-9f75-4377-8b5b-a8da757f37dc\") " Feb 23 13:54:04 crc kubenswrapper[4851]: I0223 13:54:04.579397 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-utilities" (OuterVolumeSpecName: "utilities") pod "b663d433-9f75-4377-8b5b-a8da757f37dc" (UID: "b663d433-9f75-4377-8b5b-a8da757f37dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:54:04 crc kubenswrapper[4851]: I0223 13:54:04.579590 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:54:04 crc kubenswrapper[4851]: I0223 13:54:04.584425 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b663d433-9f75-4377-8b5b-a8da757f37dc-kube-api-access-hd56t" (OuterVolumeSpecName: "kube-api-access-hd56t") pod "b663d433-9f75-4377-8b5b-a8da757f37dc" (UID: "b663d433-9f75-4377-8b5b-a8da757f37dc"). InnerVolumeSpecName "kube-api-access-hd56t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:54:04 crc kubenswrapper[4851]: I0223 13:54:04.627844 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b663d433-9f75-4377-8b5b-a8da757f37dc" (UID: "b663d433-9f75-4377-8b5b-a8da757f37dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:54:04 crc kubenswrapper[4851]: I0223 13:54:04.681705 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd56t\" (UniqueName: \"kubernetes.io/projected/b663d433-9f75-4377-8b5b-a8da757f37dc-kube-api-access-hd56t\") on node \"crc\" DevicePath \"\"" Feb 23 13:54:04 crc kubenswrapper[4851]: I0223 13:54:04.681743 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b663d433-9f75-4377-8b5b-a8da757f37dc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.024470 4851 generic.go:334] "Generic (PLEG): container finished" podID="b663d433-9f75-4377-8b5b-a8da757f37dc" containerID="aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388" exitCode=0 Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.024535 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmnkx" event={"ID":"b663d433-9f75-4377-8b5b-a8da757f37dc","Type":"ContainerDied","Data":"aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388"} Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.024878 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmnkx" event={"ID":"b663d433-9f75-4377-8b5b-a8da757f37dc","Type":"ContainerDied","Data":"6a2b6275ea7c29d323075ce856077595dcf5f54b2d56704b4f2bf62fc4fbfc7d"} Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.024905 4851 scope.go:117] "RemoveContainer" containerID="aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388" Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.024560 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmnkx" Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.048646 4851 scope.go:117] "RemoveContainer" containerID="4253f0f0a082044fa453502516be73888a05cb41670dd0ab90f1ccef21e4ec4f" Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.065779 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zmnkx"] Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.073967 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zmnkx"] Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.076248 4851 scope.go:117] "RemoveContainer" containerID="3ff9b64edd5c3c7612c3f3181b4c1085e819cab3d088dacc4ed1b31071256d49" Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.133360 4851 scope.go:117] "RemoveContainer" containerID="aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388" Feb 23 13:54:05 crc kubenswrapper[4851]: E0223 13:54:05.135464 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388\": container with ID starting with aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388 not found: ID does not exist" containerID="aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388" Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.135597 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388"} err="failed to get container status \"aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388\": rpc error: code = NotFound desc = could not find container \"aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388\": container with ID starting with aa57b4249b53e08701fdcdc4dd893e920d51e4af96860a333fed53d24b023388 not 
found: ID does not exist" Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.135686 4851 scope.go:117] "RemoveContainer" containerID="4253f0f0a082044fa453502516be73888a05cb41670dd0ab90f1ccef21e4ec4f" Feb 23 13:54:05 crc kubenswrapper[4851]: E0223 13:54:05.136155 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4253f0f0a082044fa453502516be73888a05cb41670dd0ab90f1ccef21e4ec4f\": container with ID starting with 4253f0f0a082044fa453502516be73888a05cb41670dd0ab90f1ccef21e4ec4f not found: ID does not exist" containerID="4253f0f0a082044fa453502516be73888a05cb41670dd0ab90f1ccef21e4ec4f" Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.136211 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4253f0f0a082044fa453502516be73888a05cb41670dd0ab90f1ccef21e4ec4f"} err="failed to get container status \"4253f0f0a082044fa453502516be73888a05cb41670dd0ab90f1ccef21e4ec4f\": rpc error: code = NotFound desc = could not find container \"4253f0f0a082044fa453502516be73888a05cb41670dd0ab90f1ccef21e4ec4f\": container with ID starting with 4253f0f0a082044fa453502516be73888a05cb41670dd0ab90f1ccef21e4ec4f not found: ID does not exist" Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.136236 4851 scope.go:117] "RemoveContainer" containerID="3ff9b64edd5c3c7612c3f3181b4c1085e819cab3d088dacc4ed1b31071256d49" Feb 23 13:54:05 crc kubenswrapper[4851]: E0223 13:54:05.136623 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ff9b64edd5c3c7612c3f3181b4c1085e819cab3d088dacc4ed1b31071256d49\": container with ID starting with 3ff9b64edd5c3c7612c3f3181b4c1085e819cab3d088dacc4ed1b31071256d49 not found: ID does not exist" containerID="3ff9b64edd5c3c7612c3f3181b4c1085e819cab3d088dacc4ed1b31071256d49" Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.136671 4851 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ff9b64edd5c3c7612c3f3181b4c1085e819cab3d088dacc4ed1b31071256d49"} err="failed to get container status \"3ff9b64edd5c3c7612c3f3181b4c1085e819cab3d088dacc4ed1b31071256d49\": rpc error: code = NotFound desc = could not find container \"3ff9b64edd5c3c7612c3f3181b4c1085e819cab3d088dacc4ed1b31071256d49\": container with ID starting with 3ff9b64edd5c3c7612c3f3181b4c1085e819cab3d088dacc4ed1b31071256d49 not found: ID does not exist" Feb 23 13:54:05 crc kubenswrapper[4851]: I0223 13:54:05.977743 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b663d433-9f75-4377-8b5b-a8da757f37dc" path="/var/lib/kubelet/pods/b663d433-9f75-4377-8b5b-a8da757f37dc/volumes" Feb 23 13:54:12 crc kubenswrapper[4851]: I0223 13:54:12.969001 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3" Feb 23 13:54:14 crc kubenswrapper[4851]: I0223 13:54:14.110089 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"9f5f19ff2b473e08867cb77ac287711f91c24dca567937ee2469e13f633b2f8c"} Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.493273 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s47h2"] Feb 23 13:56:32 crc kubenswrapper[4851]: E0223 13:56:32.494250 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b663d433-9f75-4377-8b5b-a8da757f37dc" containerName="registry-server" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.494263 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b663d433-9f75-4377-8b5b-a8da757f37dc" containerName="registry-server" Feb 23 13:56:32 crc kubenswrapper[4851]: E0223 13:56:32.494282 4851 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b663d433-9f75-4377-8b5b-a8da757f37dc" containerName="extract-content" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.494289 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b663d433-9f75-4377-8b5b-a8da757f37dc" containerName="extract-content" Feb 23 13:56:32 crc kubenswrapper[4851]: E0223 13:56:32.494304 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b663d433-9f75-4377-8b5b-a8da757f37dc" containerName="extract-utilities" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.494310 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b663d433-9f75-4377-8b5b-a8da757f37dc" containerName="extract-utilities" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.494512 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b663d433-9f75-4377-8b5b-a8da757f37dc" containerName="registry-server" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.495776 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.508568 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s47h2"] Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.585405 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-catalog-content\") pod \"redhat-marketplace-s47h2\" (UID: \"05fb6842-a7ad-4531-b76e-9c42e5071289\") " pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.585484 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkw82\" (UniqueName: \"kubernetes.io/projected/05fb6842-a7ad-4531-b76e-9c42e5071289-kube-api-access-wkw82\") pod \"redhat-marketplace-s47h2\" (UID: 
\"05fb6842-a7ad-4531-b76e-9c42e5071289\") " pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.585541 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-utilities\") pod \"redhat-marketplace-s47h2\" (UID: \"05fb6842-a7ad-4531-b76e-9c42e5071289\") " pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.687118 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-catalog-content\") pod \"redhat-marketplace-s47h2\" (UID: \"05fb6842-a7ad-4531-b76e-9c42e5071289\") " pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.687620 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkw82\" (UniqueName: \"kubernetes.io/projected/05fb6842-a7ad-4531-b76e-9c42e5071289-kube-api-access-wkw82\") pod \"redhat-marketplace-s47h2\" (UID: \"05fb6842-a7ad-4531-b76e-9c42e5071289\") " pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.687709 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-utilities\") pod \"redhat-marketplace-s47h2\" (UID: \"05fb6842-a7ad-4531-b76e-9c42e5071289\") " pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.687710 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-catalog-content\") pod \"redhat-marketplace-s47h2\" (UID: 
\"05fb6842-a7ad-4531-b76e-9c42e5071289\") " pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.688120 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-utilities\") pod \"redhat-marketplace-s47h2\" (UID: \"05fb6842-a7ad-4531-b76e-9c42e5071289\") " pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.707049 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkw82\" (UniqueName: \"kubernetes.io/projected/05fb6842-a7ad-4531-b76e-9c42e5071289-kube-api-access-wkw82\") pod \"redhat-marketplace-s47h2\" (UID: \"05fb6842-a7ad-4531-b76e-9c42e5071289\") " pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:32 crc kubenswrapper[4851]: I0223 13:56:32.859207 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:33 crc kubenswrapper[4851]: I0223 13:56:33.309642 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s47h2"] Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.289277 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f94nx"] Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.291705 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.305116 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f94nx"] Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.343895 4851 generic.go:334] "Generic (PLEG): container finished" podID="05fb6842-a7ad-4531-b76e-9c42e5071289" containerID="f72b27ba05fa193745d29a9abfcf6adc15e0f58f29bf37f1ee6903bffc149cc3" exitCode=0 Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.343943 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s47h2" event={"ID":"05fb6842-a7ad-4531-b76e-9c42e5071289","Type":"ContainerDied","Data":"f72b27ba05fa193745d29a9abfcf6adc15e0f58f29bf37f1ee6903bffc149cc3"} Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.343969 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s47h2" event={"ID":"05fb6842-a7ad-4531-b76e-9c42e5071289","Type":"ContainerStarted","Data":"d59628f4409d4b8985da11fcb85b7cf889c3ceda724c3383cb3b7ce983d8c85f"} Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.347077 4851 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.412084 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-catalog-content\") pod \"redhat-operators-f94nx\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.412138 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-utilities\") pod 
\"redhat-operators-f94nx\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.412269 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gfrf\" (UniqueName: \"kubernetes.io/projected/490f2484-afc5-4187-acc6-575d916ca010-kube-api-access-2gfrf\") pod \"redhat-operators-f94nx\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.513808 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-catalog-content\") pod \"redhat-operators-f94nx\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.513858 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-utilities\") pod \"redhat-operators-f94nx\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.513929 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gfrf\" (UniqueName: \"kubernetes.io/projected/490f2484-afc5-4187-acc6-575d916ca010-kube-api-access-2gfrf\") pod \"redhat-operators-f94nx\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.514296 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-catalog-content\") pod 
\"redhat-operators-f94nx\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.514620 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-utilities\") pod \"redhat-operators-f94nx\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.536214 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gfrf\" (UniqueName: \"kubernetes.io/projected/490f2484-afc5-4187-acc6-575d916ca010-kube-api-access-2gfrf\") pod \"redhat-operators-f94nx\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:34 crc kubenswrapper[4851]: I0223 13:56:34.615182 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:35 crc kubenswrapper[4851]: I0223 13:56:35.109447 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f94nx"] Feb 23 13:56:35 crc kubenswrapper[4851]: W0223 13:56:35.113128 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod490f2484_afc5_4187_acc6_575d916ca010.slice/crio-ebf72cadde898ec9a95a7ec63bfdecf01c953fcda82d3c6191520d401d0f57d1 WatchSource:0}: Error finding container ebf72cadde898ec9a95a7ec63bfdecf01c953fcda82d3c6191520d401d0f57d1: Status 404 returned error can't find the container with id ebf72cadde898ec9a95a7ec63bfdecf01c953fcda82d3c6191520d401d0f57d1 Feb 23 13:56:35 crc kubenswrapper[4851]: I0223 13:56:35.359345 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s47h2" event={"ID":"05fb6842-a7ad-4531-b76e-9c42e5071289","Type":"ContainerStarted","Data":"4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b"} Feb 23 13:56:35 crc kubenswrapper[4851]: I0223 13:56:35.360875 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f94nx" event={"ID":"490f2484-afc5-4187-acc6-575d916ca010","Type":"ContainerStarted","Data":"50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d"} Feb 23 13:56:35 crc kubenswrapper[4851]: I0223 13:56:35.360923 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f94nx" event={"ID":"490f2484-afc5-4187-acc6-575d916ca010","Type":"ContainerStarted","Data":"ebf72cadde898ec9a95a7ec63bfdecf01c953fcda82d3c6191520d401d0f57d1"} Feb 23 13:56:36 crc kubenswrapper[4851]: I0223 13:56:36.369229 4851 generic.go:334] "Generic (PLEG): container finished" podID="490f2484-afc5-4187-acc6-575d916ca010" containerID="50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d" 
exitCode=0 Feb 23 13:56:36 crc kubenswrapper[4851]: I0223 13:56:36.369401 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f94nx" event={"ID":"490f2484-afc5-4187-acc6-575d916ca010","Type":"ContainerDied","Data":"50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d"} Feb 23 13:56:36 crc kubenswrapper[4851]: I0223 13:56:36.372539 4851 generic.go:334] "Generic (PLEG): container finished" podID="05fb6842-a7ad-4531-b76e-9c42e5071289" containerID="4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b" exitCode=0 Feb 23 13:56:36 crc kubenswrapper[4851]: I0223 13:56:36.372580 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s47h2" event={"ID":"05fb6842-a7ad-4531-b76e-9c42e5071289","Type":"ContainerDied","Data":"4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b"} Feb 23 13:56:38 crc kubenswrapper[4851]: I0223 13:56:38.394388 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s47h2" event={"ID":"05fb6842-a7ad-4531-b76e-9c42e5071289","Type":"ContainerStarted","Data":"c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177"} Feb 23 13:56:38 crc kubenswrapper[4851]: I0223 13:56:38.396897 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f94nx" event={"ID":"490f2484-afc5-4187-acc6-575d916ca010","Type":"ContainerStarted","Data":"74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e"} Feb 23 13:56:38 crc kubenswrapper[4851]: I0223 13:56:38.421460 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s47h2" podStartSLOduration=3.610354213 podStartE2EDuration="6.421431549s" podCreationTimestamp="2026-02-23 13:56:32 +0000 UTC" firstStartedPulling="2026-02-23 13:56:34.346844002 +0000 UTC m=+2949.028547680" lastFinishedPulling="2026-02-23 13:56:37.157921338 +0000 UTC 
m=+2951.839625016" observedRunningTime="2026-02-23 13:56:38.418798455 +0000 UTC m=+2953.100502123" watchObservedRunningTime="2026-02-23 13:56:38.421431549 +0000 UTC m=+2953.103135237" Feb 23 13:56:40 crc kubenswrapper[4851]: I0223 13:56:40.425629 4851 generic.go:334] "Generic (PLEG): container finished" podID="490f2484-afc5-4187-acc6-575d916ca010" containerID="74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e" exitCode=0 Feb 23 13:56:40 crc kubenswrapper[4851]: I0223 13:56:40.425729 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f94nx" event={"ID":"490f2484-afc5-4187-acc6-575d916ca010","Type":"ContainerDied","Data":"74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e"} Feb 23 13:56:41 crc kubenswrapper[4851]: I0223 13:56:41.440599 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f94nx" event={"ID":"490f2484-afc5-4187-acc6-575d916ca010","Type":"ContainerStarted","Data":"0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d"} Feb 23 13:56:41 crc kubenswrapper[4851]: I0223 13:56:41.474141 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f94nx" podStartSLOduration=3.042417898 podStartE2EDuration="7.474115246s" podCreationTimestamp="2026-02-23 13:56:34 +0000 UTC" firstStartedPulling="2026-02-23 13:56:36.372036439 +0000 UTC m=+2951.053740117" lastFinishedPulling="2026-02-23 13:56:40.803733787 +0000 UTC m=+2955.485437465" observedRunningTime="2026-02-23 13:56:41.469150696 +0000 UTC m=+2956.150854384" watchObservedRunningTime="2026-02-23 13:56:41.474115246 +0000 UTC m=+2956.155818924" Feb 23 13:56:41 crc kubenswrapper[4851]: I0223 13:56:41.924552 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 13:56:41 crc kubenswrapper[4851]: I0223 13:56:41.925369 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 13:56:42 crc kubenswrapper[4851]: I0223 13:56:42.860461 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:42 crc kubenswrapper[4851]: I0223 13:56:42.860916 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:42 crc kubenswrapper[4851]: I0223 13:56:42.908835 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:43 crc kubenswrapper[4851]: I0223 13:56:43.505491 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:44 crc kubenswrapper[4851]: I0223 13:56:44.088636 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s47h2"] Feb 23 13:56:44 crc kubenswrapper[4851]: I0223 13:56:44.615841 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:44 crc kubenswrapper[4851]: I0223 13:56:44.616693 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:45 crc kubenswrapper[4851]: I0223 13:56:45.472119 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s47h2" 
podUID="05fb6842-a7ad-4531-b76e-9c42e5071289" containerName="registry-server" containerID="cri-o://c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177" gracePeriod=2 Feb 23 13:56:45 crc kubenswrapper[4851]: I0223 13:56:45.664270 4851 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f94nx" podUID="490f2484-afc5-4187-acc6-575d916ca010" containerName="registry-server" probeResult="failure" output=< Feb 23 13:56:45 crc kubenswrapper[4851]: timeout: failed to connect service ":50051" within 1s Feb 23 13:56:45 crc kubenswrapper[4851]: > Feb 23 13:56:45 crc kubenswrapper[4851]: I0223 13:56:45.945096 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.137725 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkw82\" (UniqueName: \"kubernetes.io/projected/05fb6842-a7ad-4531-b76e-9c42e5071289-kube-api-access-wkw82\") pod \"05fb6842-a7ad-4531-b76e-9c42e5071289\" (UID: \"05fb6842-a7ad-4531-b76e-9c42e5071289\") " Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.137802 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-utilities\") pod \"05fb6842-a7ad-4531-b76e-9c42e5071289\" (UID: \"05fb6842-a7ad-4531-b76e-9c42e5071289\") " Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.137837 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-catalog-content\") pod \"05fb6842-a7ad-4531-b76e-9c42e5071289\" (UID: \"05fb6842-a7ad-4531-b76e-9c42e5071289\") " Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.139050 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-utilities" (OuterVolumeSpecName: "utilities") pod "05fb6842-a7ad-4531-b76e-9c42e5071289" (UID: "05fb6842-a7ad-4531-b76e-9c42e5071289"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.144169 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fb6842-a7ad-4531-b76e-9c42e5071289-kube-api-access-wkw82" (OuterVolumeSpecName: "kube-api-access-wkw82") pod "05fb6842-a7ad-4531-b76e-9c42e5071289" (UID: "05fb6842-a7ad-4531-b76e-9c42e5071289"). InnerVolumeSpecName "kube-api-access-wkw82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.160902 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05fb6842-a7ad-4531-b76e-9c42e5071289" (UID: "05fb6842-a7ad-4531-b76e-9c42e5071289"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.240623 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.240661 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05fb6842-a7ad-4531-b76e-9c42e5071289-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.240673 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkw82\" (UniqueName: \"kubernetes.io/projected/05fb6842-a7ad-4531-b76e-9c42e5071289-kube-api-access-wkw82\") on node \"crc\" DevicePath \"\"" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.482769 4851 generic.go:334] "Generic (PLEG): container finished" podID="05fb6842-a7ad-4531-b76e-9c42e5071289" containerID="c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177" exitCode=0 Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.482822 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s47h2" event={"ID":"05fb6842-a7ad-4531-b76e-9c42e5071289","Type":"ContainerDied","Data":"c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177"} Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.482904 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s47h2" event={"ID":"05fb6842-a7ad-4531-b76e-9c42e5071289","Type":"ContainerDied","Data":"d59628f4409d4b8985da11fcb85b7cf889c3ceda724c3383cb3b7ce983d8c85f"} Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.482927 4851 scope.go:117] "RemoveContainer" containerID="c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 
13:56:46.482851 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s47h2" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.505874 4851 scope.go:117] "RemoveContainer" containerID="4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.533018 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s47h2"] Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.539534 4851 scope.go:117] "RemoveContainer" containerID="f72b27ba05fa193745d29a9abfcf6adc15e0f58f29bf37f1ee6903bffc149cc3" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.541956 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s47h2"] Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.598411 4851 scope.go:117] "RemoveContainer" containerID="c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177" Feb 23 13:56:46 crc kubenswrapper[4851]: E0223 13:56:46.598891 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177\": container with ID starting with c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177 not found: ID does not exist" containerID="c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.598946 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177"} err="failed to get container status \"c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177\": rpc error: code = NotFound desc = could not find container \"c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177\": container with ID starting with 
c64fa8e07c711d48dc6ba47cc463c73e38d69b1e8c319288bde006316728c177 not found: ID does not exist" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.598977 4851 scope.go:117] "RemoveContainer" containerID="4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b" Feb 23 13:56:46 crc kubenswrapper[4851]: E0223 13:56:46.599428 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b\": container with ID starting with 4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b not found: ID does not exist" containerID="4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.599455 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b"} err="failed to get container status \"4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b\": rpc error: code = NotFound desc = could not find container \"4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b\": container with ID starting with 4bd646c1f78e920feebd627db560cbdb3308f46232c1de1e1598f45094e5b74b not found: ID does not exist" Feb 23 13:56:46 crc kubenswrapper[4851]: I0223 13:56:46.599476 4851 scope.go:117] "RemoveContainer" containerID="f72b27ba05fa193745d29a9abfcf6adc15e0f58f29bf37f1ee6903bffc149cc3" Feb 23 13:56:46 crc kubenswrapper[4851]: E0223 13:56:46.599747 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f72b27ba05fa193745d29a9abfcf6adc15e0f58f29bf37f1ee6903bffc149cc3\": container with ID starting with f72b27ba05fa193745d29a9abfcf6adc15e0f58f29bf37f1ee6903bffc149cc3 not found: ID does not exist" containerID="f72b27ba05fa193745d29a9abfcf6adc15e0f58f29bf37f1ee6903bffc149cc3" Feb 23 13:56:46 crc 
kubenswrapper[4851]: I0223 13:56:46.599789 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f72b27ba05fa193745d29a9abfcf6adc15e0f58f29bf37f1ee6903bffc149cc3"} err="failed to get container status \"f72b27ba05fa193745d29a9abfcf6adc15e0f58f29bf37f1ee6903bffc149cc3\": rpc error: code = NotFound desc = could not find container \"f72b27ba05fa193745d29a9abfcf6adc15e0f58f29bf37f1ee6903bffc149cc3\": container with ID starting with f72b27ba05fa193745d29a9abfcf6adc15e0f58f29bf37f1ee6903bffc149cc3 not found: ID does not exist" Feb 23 13:56:47 crc kubenswrapper[4851]: I0223 13:56:47.985401 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05fb6842-a7ad-4531-b76e-9c42e5071289" path="/var/lib/kubelet/pods/05fb6842-a7ad-4531-b76e-9c42e5071289/volumes" Feb 23 13:56:54 crc kubenswrapper[4851]: I0223 13:56:54.665042 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:54 crc kubenswrapper[4851]: I0223 13:56:54.713426 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:54 crc kubenswrapper[4851]: I0223 13:56:54.898976 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f94nx"] Feb 23 13:56:56 crc kubenswrapper[4851]: I0223 13:56:56.567890 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f94nx" podUID="490f2484-afc5-4187-acc6-575d916ca010" containerName="registry-server" containerID="cri-o://0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d" gracePeriod=2 Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.027567 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f94nx" Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.159158 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-utilities\") pod \"490f2484-afc5-4187-acc6-575d916ca010\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.159220 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-catalog-content\") pod \"490f2484-afc5-4187-acc6-575d916ca010\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.159271 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gfrf\" (UniqueName: \"kubernetes.io/projected/490f2484-afc5-4187-acc6-575d916ca010-kube-api-access-2gfrf\") pod \"490f2484-afc5-4187-acc6-575d916ca010\" (UID: \"490f2484-afc5-4187-acc6-575d916ca010\") " Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.160067 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-utilities" (OuterVolumeSpecName: "utilities") pod "490f2484-afc5-4187-acc6-575d916ca010" (UID: "490f2484-afc5-4187-acc6-575d916ca010"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.166477 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490f2484-afc5-4187-acc6-575d916ca010-kube-api-access-2gfrf" (OuterVolumeSpecName: "kube-api-access-2gfrf") pod "490f2484-afc5-4187-acc6-575d916ca010" (UID: "490f2484-afc5-4187-acc6-575d916ca010"). InnerVolumeSpecName "kube-api-access-2gfrf". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.261564 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.261602 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gfrf\" (UniqueName: \"kubernetes.io/projected/490f2484-afc5-4187-acc6-575d916ca010-kube-api-access-2gfrf\") on node \"crc\" DevicePath \"\""
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.274905 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "490f2484-afc5-4187-acc6-575d916ca010" (UID: "490f2484-afc5-4187-acc6-575d916ca010"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.363124 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/490f2484-afc5-4187-acc6-575d916ca010-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.577898 4851 generic.go:334] "Generic (PLEG): container finished" podID="490f2484-afc5-4187-acc6-575d916ca010" containerID="0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d" exitCode=0
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.577938 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f94nx" event={"ID":"490f2484-afc5-4187-acc6-575d916ca010","Type":"ContainerDied","Data":"0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d"}
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.577963 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f94nx" event={"ID":"490f2484-afc5-4187-acc6-575d916ca010","Type":"ContainerDied","Data":"ebf72cadde898ec9a95a7ec63bfdecf01c953fcda82d3c6191520d401d0f57d1"}
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.577975 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f94nx"
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.577981 4851 scope.go:117] "RemoveContainer" containerID="0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d"
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.612634 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f94nx"]
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.614679 4851 scope.go:117] "RemoveContainer" containerID="74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e"
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.621624 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f94nx"]
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.632715 4851 scope.go:117] "RemoveContainer" containerID="50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d"
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.670207 4851 scope.go:117] "RemoveContainer" containerID="0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d"
Feb 23 13:56:57 crc kubenswrapper[4851]: E0223 13:56:57.670791 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d\": container with ID starting with 0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d not found: ID does not exist" containerID="0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d"
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.670833 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d"} err="failed to get container status \"0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d\": rpc error: code = NotFound desc = could not find container \"0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d\": container with ID starting with 0328620e88bf1eab0e67aa7867c69fc78bead28e784109eed222b42473d6437d not found: ID does not exist"
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.670859 4851 scope.go:117] "RemoveContainer" containerID="74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e"
Feb 23 13:56:57 crc kubenswrapper[4851]: E0223 13:56:57.671157 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e\": container with ID starting with 74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e not found: ID does not exist" containerID="74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e"
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.671176 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e"} err="failed to get container status \"74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e\": rpc error: code = NotFound desc = could not find container \"74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e\": container with ID starting with 74bfd949e03d3b1c8e9e50a266632acb7e4a385812a113f2473ee6d8caff800e not found: ID does not exist"
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.671190 4851 scope.go:117] "RemoveContainer" containerID="50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d"
Feb 23 13:56:57 crc kubenswrapper[4851]: E0223 13:56:57.671439 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d\": container with ID starting with 50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d not found: ID does not exist" containerID="50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d"
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.671459 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d"} err="failed to get container status \"50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d\": rpc error: code = NotFound desc = could not find container \"50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d\": container with ID starting with 50d642728c9df072dab70c9683cf2c78466e20eaf58b8ff54449d5199717954d not found: ID does not exist"
Feb 23 13:56:57 crc kubenswrapper[4851]: I0223 13:56:57.984019 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="490f2484-afc5-4187-acc6-575d916ca010" path="/var/lib/kubelet/pods/490f2484-afc5-4187-acc6-575d916ca010/volumes"
Feb 23 13:57:11 crc kubenswrapper[4851]: I0223 13:57:11.924790 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 13:57:11 crc kubenswrapper[4851]: I0223 13:57:11.925346 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 13:57:41 crc kubenswrapper[4851]: I0223 13:57:41.924847 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 13:57:41 crc kubenswrapper[4851]: I0223 13:57:41.925400 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 13:57:41 crc kubenswrapper[4851]: I0223 13:57:41.925451 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg"
Feb 23 13:57:41 crc kubenswrapper[4851]: I0223 13:57:41.926219 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9f5f19ff2b473e08867cb77ac287711f91c24dca567937ee2469e13f633b2f8c"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 13:57:41 crc kubenswrapper[4851]: I0223 13:57:41.926280 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://9f5f19ff2b473e08867cb77ac287711f91c24dca567937ee2469e13f633b2f8c" gracePeriod=600
Feb 23 13:57:42 crc kubenswrapper[4851]: I0223 13:57:42.959305 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="9f5f19ff2b473e08867cb77ac287711f91c24dca567937ee2469e13f633b2f8c" exitCode=0
Feb 23 13:57:42 crc kubenswrapper[4851]: I0223 13:57:42.959358 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"9f5f19ff2b473e08867cb77ac287711f91c24dca567937ee2469e13f633b2f8c"}
Feb 23 13:57:42 crc kubenswrapper[4851]: I0223 13:57:42.959674 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab"}
Feb 23 13:57:42 crc kubenswrapper[4851]: I0223 13:57:42.959696 4851 scope.go:117] "RemoveContainer" containerID="6e5cd65f308f3f9e8c09a43625dad270f8f5ade549ff310d054c8785610d34e3"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.147209 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"]
Feb 23 14:00:00 crc kubenswrapper[4851]: E0223 14:00:00.148095 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490f2484-afc5-4187-acc6-575d916ca010" containerName="extract-content"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.148108 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="490f2484-afc5-4187-acc6-575d916ca010" containerName="extract-content"
Feb 23 14:00:00 crc kubenswrapper[4851]: E0223 14:00:00.148124 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fb6842-a7ad-4531-b76e-9c42e5071289" containerName="registry-server"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.148131 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fb6842-a7ad-4531-b76e-9c42e5071289" containerName="registry-server"
Feb 23 14:00:00 crc kubenswrapper[4851]: E0223 14:00:00.148144 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fb6842-a7ad-4531-b76e-9c42e5071289" containerName="extract-content"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.148150 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fb6842-a7ad-4531-b76e-9c42e5071289" containerName="extract-content"
Feb 23 14:00:00 crc kubenswrapper[4851]: E0223 14:00:00.148166 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490f2484-afc5-4187-acc6-575d916ca010" containerName="registry-server"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.148172 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="490f2484-afc5-4187-acc6-575d916ca010" containerName="registry-server"
Feb 23 14:00:00 crc kubenswrapper[4851]: E0223 14:00:00.148183 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fb6842-a7ad-4531-b76e-9c42e5071289" containerName="extract-utilities"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.148189 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fb6842-a7ad-4531-b76e-9c42e5071289" containerName="extract-utilities"
Feb 23 14:00:00 crc kubenswrapper[4851]: E0223 14:00:00.148202 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490f2484-afc5-4187-acc6-575d916ca010" containerName="extract-utilities"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.148210 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="490f2484-afc5-4187-acc6-575d916ca010" containerName="extract-utilities"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.148401 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fb6842-a7ad-4531-b76e-9c42e5071289" containerName="registry-server"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.148415 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="490f2484-afc5-4187-acc6-575d916ca010" containerName="registry-server"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.149046 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.157546 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.157789 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.158888 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"]
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.316173 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b80c6565-4dee-4483-b049-1a075c609eb5-secret-volume\") pod \"collect-profiles-29530920-dfjqx\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.316563 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt79f\" (UniqueName: \"kubernetes.io/projected/b80c6565-4dee-4483-b049-1a075c609eb5-kube-api-access-rt79f\") pod \"collect-profiles-29530920-dfjqx\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.316713 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b80c6565-4dee-4483-b049-1a075c609eb5-config-volume\") pod \"collect-profiles-29530920-dfjqx\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.417988 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b80c6565-4dee-4483-b049-1a075c609eb5-config-volume\") pod \"collect-profiles-29530920-dfjqx\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.418089 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b80c6565-4dee-4483-b049-1a075c609eb5-secret-volume\") pod \"collect-profiles-29530920-dfjqx\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.418149 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt79f\" (UniqueName: \"kubernetes.io/projected/b80c6565-4dee-4483-b049-1a075c609eb5-kube-api-access-rt79f\") pod \"collect-profiles-29530920-dfjqx\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.418855 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b80c6565-4dee-4483-b049-1a075c609eb5-config-volume\") pod \"collect-profiles-29530920-dfjqx\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.425425 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b80c6565-4dee-4483-b049-1a075c609eb5-secret-volume\") pod \"collect-profiles-29530920-dfjqx\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.437259 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt79f\" (UniqueName: \"kubernetes.io/projected/b80c6565-4dee-4483-b049-1a075c609eb5-kube-api-access-rt79f\") pod \"collect-profiles-29530920-dfjqx\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.484683 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:00 crc kubenswrapper[4851]: I0223 14:00:00.941290 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"]
Feb 23 14:00:01 crc kubenswrapper[4851]: I0223 14:00:01.128553 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx" event={"ID":"b80c6565-4dee-4483-b049-1a075c609eb5","Type":"ContainerStarted","Data":"15ac09ca32c4de3bfa09b4fdddfe9f1bca363309c6ad2b2b99c25841b81e499f"}
Feb 23 14:00:02 crc kubenswrapper[4851]: I0223 14:00:02.139487 4851 generic.go:334] "Generic (PLEG): container finished" podID="b80c6565-4dee-4483-b049-1a075c609eb5" containerID="f6da5497553137784b69e676de538144cb02bb3dba3adeb875d2259eb0e43a62" exitCode=0
Feb 23 14:00:02 crc kubenswrapper[4851]: I0223 14:00:02.139590 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx" event={"ID":"b80c6565-4dee-4483-b049-1a075c609eb5","Type":"ContainerDied","Data":"f6da5497553137784b69e676de538144cb02bb3dba3adeb875d2259eb0e43a62"}
Feb 23 14:00:03 crc kubenswrapper[4851]: I0223 14:00:03.477878 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:03 crc kubenswrapper[4851]: I0223 14:00:03.581039 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b80c6565-4dee-4483-b049-1a075c609eb5-secret-volume\") pod \"b80c6565-4dee-4483-b049-1a075c609eb5\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") "
Feb 23 14:00:03 crc kubenswrapper[4851]: I0223 14:00:03.581124 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b80c6565-4dee-4483-b049-1a075c609eb5-config-volume\") pod \"b80c6565-4dee-4483-b049-1a075c609eb5\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") "
Feb 23 14:00:03 crc kubenswrapper[4851]: I0223 14:00:03.581217 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt79f\" (UniqueName: \"kubernetes.io/projected/b80c6565-4dee-4483-b049-1a075c609eb5-kube-api-access-rt79f\") pod \"b80c6565-4dee-4483-b049-1a075c609eb5\" (UID: \"b80c6565-4dee-4483-b049-1a075c609eb5\") "
Feb 23 14:00:03 crc kubenswrapper[4851]: I0223 14:00:03.582019 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b80c6565-4dee-4483-b049-1a075c609eb5-config-volume" (OuterVolumeSpecName: "config-volume") pod "b80c6565-4dee-4483-b049-1a075c609eb5" (UID: "b80c6565-4dee-4483-b049-1a075c609eb5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 14:00:03 crc kubenswrapper[4851]: I0223 14:00:03.589153 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b80c6565-4dee-4483-b049-1a075c609eb5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b80c6565-4dee-4483-b049-1a075c609eb5" (UID: "b80c6565-4dee-4483-b049-1a075c609eb5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 14:00:03 crc kubenswrapper[4851]: I0223 14:00:03.589171 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b80c6565-4dee-4483-b049-1a075c609eb5-kube-api-access-rt79f" (OuterVolumeSpecName: "kube-api-access-rt79f") pod "b80c6565-4dee-4483-b049-1a075c609eb5" (UID: "b80c6565-4dee-4483-b049-1a075c609eb5"). InnerVolumeSpecName "kube-api-access-rt79f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 14:00:03 crc kubenswrapper[4851]: I0223 14:00:03.682817 4851 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b80c6565-4dee-4483-b049-1a075c609eb5-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 23 14:00:03 crc kubenswrapper[4851]: I0223 14:00:03.682853 4851 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b80c6565-4dee-4483-b049-1a075c609eb5-config-volume\") on node \"crc\" DevicePath \"\""
Feb 23 14:00:03 crc kubenswrapper[4851]: I0223 14:00:03.682864 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt79f\" (UniqueName: \"kubernetes.io/projected/b80c6565-4dee-4483-b049-1a075c609eb5-kube-api-access-rt79f\") on node \"crc\" DevicePath \"\""
Feb 23 14:00:04 crc kubenswrapper[4851]: I0223 14:00:04.157668 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx" event={"ID":"b80c6565-4dee-4483-b049-1a075c609eb5","Type":"ContainerDied","Data":"15ac09ca32c4de3bfa09b4fdddfe9f1bca363309c6ad2b2b99c25841b81e499f"}
Feb 23 14:00:04 crc kubenswrapper[4851]: I0223 14:00:04.157707 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ac09ca32c4de3bfa09b4fdddfe9f1bca363309c6ad2b2b99c25841b81e499f"
Feb 23 14:00:04 crc kubenswrapper[4851]: I0223 14:00:04.157769 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530920-dfjqx"
Feb 23 14:00:04 crc kubenswrapper[4851]: I0223 14:00:04.554867 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9"]
Feb 23 14:00:04 crc kubenswrapper[4851]: I0223 14:00:04.563620 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530875-xcnh9"]
Feb 23 14:00:05 crc kubenswrapper[4851]: I0223 14:00:05.989908 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a565598-3ebd-4ad0-a2e9-7c06501d8e1b" path="/var/lib/kubelet/pods/4a565598-3ebd-4ad0-a2e9-7c06501d8e1b/volumes"
Feb 23 14:00:11 crc kubenswrapper[4851]: I0223 14:00:11.925127 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 14:00:11 crc kubenswrapper[4851]: I0223 14:00:11.925951 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 14:00:39 crc kubenswrapper[4851]: I0223 14:00:39.490032 4851 scope.go:117] "RemoveContainer" containerID="23bce3b235d93ac46533a3a90f560b626d45cbf82c07a17f5b6765ad5bbef436"
Feb 23 14:00:41 crc kubenswrapper[4851]: I0223 14:00:41.925268 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 14:00:41 crc kubenswrapper[4851]: I0223 14:00:41.925965 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.146983 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29530921-92sp9"]
Feb 23 14:01:00 crc kubenswrapper[4851]: E0223 14:01:00.147902 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b80c6565-4dee-4483-b049-1a075c609eb5" containerName="collect-profiles"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.147919 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b80c6565-4dee-4483-b049-1a075c609eb5" containerName="collect-profiles"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.148110 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b80c6565-4dee-4483-b049-1a075c609eb5" containerName="collect-profiles"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.148926 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.172341 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29530921-92sp9"]
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.293302 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-fernet-keys\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.293387 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjz5j\" (UniqueName: \"kubernetes.io/projected/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-kube-api-access-qjz5j\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.293470 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-combined-ca-bundle\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.293514 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-config-data\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.395027 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-fernet-keys\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.395125 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjz5j\" (UniqueName: \"kubernetes.io/projected/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-kube-api-access-qjz5j\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.395188 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-combined-ca-bundle\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.395277 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-config-data\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.401560 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-fernet-keys\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.402048 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-config-data\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.406974 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-combined-ca-bundle\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.416700 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjz5j\" (UniqueName: \"kubernetes.io/projected/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-kube-api-access-qjz5j\") pod \"keystone-cron-29530921-92sp9\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") " pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.479457 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:00 crc kubenswrapper[4851]: I0223 14:01:00.939401 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29530921-92sp9"]
Feb 23 14:01:01 crc kubenswrapper[4851]: I0223 14:01:01.676857 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530921-92sp9" event={"ID":"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e","Type":"ContainerStarted","Data":"ab31121d0846f92b7753dc4f1b4bd61e011bfd68fe2fe73fa1c029ddea551050"}
Feb 23 14:01:01 crc kubenswrapper[4851]: I0223 14:01:01.677139 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530921-92sp9" event={"ID":"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e","Type":"ContainerStarted","Data":"69e599c324556739e1b99eb2b5ab69bf374e68e6cb1e757771f3898e012add66"}
Feb 23 14:01:03 crc kubenswrapper[4851]: I0223 14:01:03.699306 4851 generic.go:334] "Generic (PLEG): container finished" podID="9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e" containerID="ab31121d0846f92b7753dc4f1b4bd61e011bfd68fe2fe73fa1c029ddea551050" exitCode=0
Feb 23 14:01:03 crc kubenswrapper[4851]: I0223 14:01:03.699368 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530921-92sp9" event={"ID":"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e","Type":"ContainerDied","Data":"ab31121d0846f92b7753dc4f1b4bd61e011bfd68fe2fe73fa1c029ddea551050"}
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.033991 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.094893 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjz5j\" (UniqueName: \"kubernetes.io/projected/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-kube-api-access-qjz5j\") pod \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") "
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.095267 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-combined-ca-bundle\") pod \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") "
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.095456 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-fernet-keys\") pod \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") "
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.095508 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-config-data\") pod \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\" (UID: \"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e\") "
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.106379 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e" (UID: "9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.116147 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-kube-api-access-qjz5j" (OuterVolumeSpecName: "kube-api-access-qjz5j") pod "9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e" (UID: "9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e"). InnerVolumeSpecName "kube-api-access-qjz5j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.136175 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e" (UID: "9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.159321 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-config-data" (OuterVolumeSpecName: "config-data") pod "9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e" (UID: "9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.197453 4851 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.197482 4851 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.197491 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.197500 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjz5j\" (UniqueName: \"kubernetes.io/projected/9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e-kube-api-access-qjz5j\") on node \"crc\" DevicePath \"\""
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.715159 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29530921-92sp9" event={"ID":"9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e","Type":"ContainerDied","Data":"69e599c324556739e1b99eb2b5ab69bf374e68e6cb1e757771f3898e012add66"}
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.715191 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29530921-92sp9"
Feb 23 14:01:05 crc kubenswrapper[4851]: I0223 14:01:05.715194 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e599c324556739e1b99eb2b5ab69bf374e68e6cb1e757771f3898e012add66"
Feb 23 14:01:07 crc kubenswrapper[4851]: I0223 14:01:07.731300 4851 generic.go:334] "Generic (PLEG): container finished" podID="85d7dda0-1545-4b56-9694-c704cfec078c" containerID="5c865590b2de663985b68dd807e9a832d15255b9c543380aa1c9cd27a462015c" exitCode=0
Feb 23 14:01:07 crc kubenswrapper[4851]: I0223 14:01:07.731388 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"85d7dda0-1545-4b56-9694-c704cfec078c","Type":"ContainerDied","Data":"5c865590b2de663985b68dd807e9a832d15255b9c543380aa1c9cd27a462015c"}
Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.120813 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.267148 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config\") pod \"85d7dda0-1545-4b56-9694-c704cfec078c\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") "
Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.267283 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-config-data\") pod \"85d7dda0-1545-4b56-9694-c704cfec078c\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") "
Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.267382 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-temporary\") pod \"85d7dda0-1545-4b56-9694-c704cfec078c\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.267437 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-workdir\") pod \"85d7dda0-1545-4b56-9694-c704cfec078c\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.267457 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ca-certs\") pod \"85d7dda0-1545-4b56-9694-c704cfec078c\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.267615 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ssh-key\") pod \"85d7dda0-1545-4b56-9694-c704cfec078c\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.267643 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srsqs\" (UniqueName: \"kubernetes.io/projected/85d7dda0-1545-4b56-9694-c704cfec078c-kube-api-access-srsqs\") pod \"85d7dda0-1545-4b56-9694-c704cfec078c\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.267994 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "85d7dda0-1545-4b56-9694-c704cfec078c" (UID: 
"85d7dda0-1545-4b56-9694-c704cfec078c"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.268200 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-config-data" (OuterVolumeSpecName: "config-data") pod "85d7dda0-1545-4b56-9694-c704cfec078c" (UID: "85d7dda0-1545-4b56-9694-c704cfec078c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.268132 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config-secret\") pod \"85d7dda0-1545-4b56-9694-c704cfec078c\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.268545 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"85d7dda0-1545-4b56-9694-c704cfec078c\" (UID: \"85d7dda0-1545-4b56-9694-c704cfec078c\") " Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.269263 4851 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.269281 4851 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.271360 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "85d7dda0-1545-4b56-9694-c704cfec078c" (UID: "85d7dda0-1545-4b56-9694-c704cfec078c"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.273757 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d7dda0-1545-4b56-9694-c704cfec078c-kube-api-access-srsqs" (OuterVolumeSpecName: "kube-api-access-srsqs") pod "85d7dda0-1545-4b56-9694-c704cfec078c" (UID: "85d7dda0-1545-4b56-9694-c704cfec078c"). InnerVolumeSpecName "kube-api-access-srsqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.274474 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "85d7dda0-1545-4b56-9694-c704cfec078c" (UID: "85d7dda0-1545-4b56-9694-c704cfec078c"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.298875 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "85d7dda0-1545-4b56-9694-c704cfec078c" (UID: "85d7dda0-1545-4b56-9694-c704cfec078c"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.301019 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "85d7dda0-1545-4b56-9694-c704cfec078c" (UID: "85d7dda0-1545-4b56-9694-c704cfec078c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.308523 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "85d7dda0-1545-4b56-9694-c704cfec078c" (UID: "85d7dda0-1545-4b56-9694-c704cfec078c"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.334400 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "85d7dda0-1545-4b56-9694-c704cfec078c" (UID: "85d7dda0-1545-4b56-9694-c704cfec078c"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.371396 4851 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.371428 4851 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/85d7dda0-1545-4b56-9694-c704cfec078c-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.371443 4851 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.371469 4851 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.371483 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srsqs\" (UniqueName: \"kubernetes.io/projected/85d7dda0-1545-4b56-9694-c704cfec078c-kube-api-access-srsqs\") on node \"crc\" DevicePath \"\"" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.371495 4851 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85d7dda0-1545-4b56-9694-c704cfec078c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.371524 4851 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 
14:01:09.390010 4851 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.473381 4851 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.750095 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"85d7dda0-1545-4b56-9694-c704cfec078c","Type":"ContainerDied","Data":"20fe9faee03a8821f38d089b187dbe5603e05c2c6af08453f0bd9d424e4130b1"} Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.750132 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20fe9faee03a8821f38d089b187dbe5603e05c2c6af08453f0bd9d424e4130b1" Feb 23 14:01:09 crc kubenswrapper[4851]: I0223 14:01:09.750161 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 14:01:11 crc kubenswrapper[4851]: I0223 14:01:11.924767 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 14:01:11 crc kubenswrapper[4851]: I0223 14:01:11.925457 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 14:01:11 crc kubenswrapper[4851]: I0223 14:01:11.925508 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 14:01:11 crc kubenswrapper[4851]: I0223 14:01:11.926112 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 14:01:11 crc kubenswrapper[4851]: I0223 14:01:11.926185 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" gracePeriod=600 Feb 23 14:01:12 crc kubenswrapper[4851]: E0223 14:01:12.060314 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:01:12 crc kubenswrapper[4851]: I0223 14:01:12.781090 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" exitCode=0 Feb 23 14:01:12 crc kubenswrapper[4851]: I0223 14:01:12.781145 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab"} Feb 23 14:01:12 crc kubenswrapper[4851]: I0223 14:01:12.781189 4851 scope.go:117] "RemoveContainer" containerID="9f5f19ff2b473e08867cb77ac287711f91c24dca567937ee2469e13f633b2f8c" Feb 23 14:01:12 crc kubenswrapper[4851]: I0223 14:01:12.782062 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:01:12 crc kubenswrapper[4851]: E0223 14:01:12.782445 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:01:16 crc kubenswrapper[4851]: I0223 14:01:16.909634 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 23 14:01:16 crc 
kubenswrapper[4851]: E0223 14:01:16.910476 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e" containerName="keystone-cron" Feb 23 14:01:16 crc kubenswrapper[4851]: I0223 14:01:16.910487 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e" containerName="keystone-cron" Feb 23 14:01:16 crc kubenswrapper[4851]: E0223 14:01:16.910513 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d7dda0-1545-4b56-9694-c704cfec078c" containerName="tempest-tests-tempest-tests-runner" Feb 23 14:01:16 crc kubenswrapper[4851]: I0223 14:01:16.910519 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d7dda0-1545-4b56-9694-c704cfec078c" containerName="tempest-tests-tempest-tests-runner" Feb 23 14:01:16 crc kubenswrapper[4851]: I0223 14:01:16.910712 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d7dda0-1545-4b56-9694-c704cfec078c" containerName="tempest-tests-tempest-tests-runner" Feb 23 14:01:16 crc kubenswrapper[4851]: I0223 14:01:16.910720 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e" containerName="keystone-cron" Feb 23 14:01:16 crc kubenswrapper[4851]: I0223 14:01:16.911307 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 14:01:16 crc kubenswrapper[4851]: I0223 14:01:16.913705 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pj67j" Feb 23 14:01:16 crc kubenswrapper[4851]: I0223 14:01:16.920625 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 23 14:01:17 crc kubenswrapper[4851]: I0223 14:01:17.014150 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khcm5\" (UniqueName: \"kubernetes.io/projected/68dbe829-aaf9-45eb-9b13-1c7e73a34cb6-kube-api-access-khcm5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68dbe829-aaf9-45eb-9b13-1c7e73a34cb6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 14:01:17 crc kubenswrapper[4851]: I0223 14:01:17.014918 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68dbe829-aaf9-45eb-9b13-1c7e73a34cb6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 14:01:17 crc kubenswrapper[4851]: I0223 14:01:17.117169 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68dbe829-aaf9-45eb-9b13-1c7e73a34cb6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 14:01:17 crc kubenswrapper[4851]: I0223 14:01:17.117362 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khcm5\" (UniqueName: 
\"kubernetes.io/projected/68dbe829-aaf9-45eb-9b13-1c7e73a34cb6-kube-api-access-khcm5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68dbe829-aaf9-45eb-9b13-1c7e73a34cb6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 14:01:17 crc kubenswrapper[4851]: I0223 14:01:17.118057 4851 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68dbe829-aaf9-45eb-9b13-1c7e73a34cb6\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 14:01:17 crc kubenswrapper[4851]: I0223 14:01:17.135954 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khcm5\" (UniqueName: \"kubernetes.io/projected/68dbe829-aaf9-45eb-9b13-1c7e73a34cb6-kube-api-access-khcm5\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68dbe829-aaf9-45eb-9b13-1c7e73a34cb6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 14:01:17 crc kubenswrapper[4851]: I0223 14:01:17.143079 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"68dbe829-aaf9-45eb-9b13-1c7e73a34cb6\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 14:01:17 crc kubenswrapper[4851]: I0223 14:01:17.244783 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 23 14:01:17 crc kubenswrapper[4851]: I0223 14:01:17.689560 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 23 14:01:17 crc kubenswrapper[4851]: I0223 14:01:17.831872 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"68dbe829-aaf9-45eb-9b13-1c7e73a34cb6","Type":"ContainerStarted","Data":"d30f626f35b90db2383a0a6e0c4836e7387795b2fc97ac8aea3d00ea5a641f96"} Feb 23 14:01:18 crc kubenswrapper[4851]: I0223 14:01:18.842024 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"68dbe829-aaf9-45eb-9b13-1c7e73a34cb6","Type":"ContainerStarted","Data":"57526c7ac0eee197507f91cb4bc77b52cfc3ad075bafbc2853d13a2ad24e82c9"} Feb 23 14:01:18 crc kubenswrapper[4851]: I0223 14:01:18.856417 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.027940852 podStartE2EDuration="2.856397798s" podCreationTimestamp="2026-02-23 14:01:16 +0000 UTC" firstStartedPulling="2026-02-23 14:01:17.697502358 +0000 UTC m=+3232.379206036" lastFinishedPulling="2026-02-23 14:01:18.525959304 +0000 UTC m=+3233.207662982" observedRunningTime="2026-02-23 14:01:18.855574534 +0000 UTC m=+3233.537278222" watchObservedRunningTime="2026-02-23 14:01:18.856397798 +0000 UTC m=+3233.538101476" Feb 23 14:01:24 crc kubenswrapper[4851]: I0223 14:01:24.969394 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:01:24 crc kubenswrapper[4851]: E0223 14:01:24.970286 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:01:39 crc kubenswrapper[4851]: I0223 14:01:39.968532 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:01:39 crc kubenswrapper[4851]: E0223 14:01:39.969406 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.099602 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kjx6d/must-gather-cc4zr"] Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.101444 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kjx6d/must-gather-cc4zr" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.116444 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kjx6d"/"openshift-service-ca.crt" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.116516 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kjx6d"/"default-dockercfg-2grxt" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.116574 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kjx6d"/"kube-root-ca.crt" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.124772 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kjx6d/must-gather-cc4zr"] Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.157577 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jrs2\" (UniqueName: \"kubernetes.io/projected/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-kube-api-access-6jrs2\") pod \"must-gather-cc4zr\" (UID: \"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c\") " pod="openshift-must-gather-kjx6d/must-gather-cc4zr" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.157742 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-must-gather-output\") pod \"must-gather-cc4zr\" (UID: \"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c\") " pod="openshift-must-gather-kjx6d/must-gather-cc4zr" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.259727 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jrs2\" (UniqueName: \"kubernetes.io/projected/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-kube-api-access-6jrs2\") pod \"must-gather-cc4zr\" (UID: \"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c\") " 
pod="openshift-must-gather-kjx6d/must-gather-cc4zr" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.260122 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-must-gather-output\") pod \"must-gather-cc4zr\" (UID: \"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c\") " pod="openshift-must-gather-kjx6d/must-gather-cc4zr" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.260588 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-must-gather-output\") pod \"must-gather-cc4zr\" (UID: \"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c\") " pod="openshift-must-gather-kjx6d/must-gather-cc4zr" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.283262 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jrs2\" (UniqueName: \"kubernetes.io/projected/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-kube-api-access-6jrs2\") pod \"must-gather-cc4zr\" (UID: \"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c\") " pod="openshift-must-gather-kjx6d/must-gather-cc4zr" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.432650 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kjx6d/must-gather-cc4zr" Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.878678 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kjx6d/must-gather-cc4zr"] Feb 23 14:01:40 crc kubenswrapper[4851]: W0223 14:01:40.888323 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod064ec4aa_abbc_4ff6_9550_eda3ba5ed23c.slice/crio-576ee4e08b4ecf9940f975691dd4b3334046cfaa3d2504e9d454851f555ee3f6 WatchSource:0}: Error finding container 576ee4e08b4ecf9940f975691dd4b3334046cfaa3d2504e9d454851f555ee3f6: Status 404 returned error can't find the container with id 576ee4e08b4ecf9940f975691dd4b3334046cfaa3d2504e9d454851f555ee3f6 Feb 23 14:01:40 crc kubenswrapper[4851]: I0223 14:01:40.891645 4851 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 14:01:41 crc kubenswrapper[4851]: I0223 14:01:41.029162 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/must-gather-cc4zr" event={"ID":"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c","Type":"ContainerStarted","Data":"576ee4e08b4ecf9940f975691dd4b3334046cfaa3d2504e9d454851f555ee3f6"} Feb 23 14:01:47 crc kubenswrapper[4851]: I0223 14:01:47.088402 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/must-gather-cc4zr" event={"ID":"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c","Type":"ContainerStarted","Data":"ddcae191c926ed8a40ccd0e061e35c51dd83d35aad190c3b1dc1d04c47418fa5"} Feb 23 14:01:47 crc kubenswrapper[4851]: I0223 14:01:47.088854 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/must-gather-cc4zr" event={"ID":"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c","Type":"ContainerStarted","Data":"e8d1f24f6eac87bab66646102acc5489bc18efe85d67f25b377617e8107bb945"} Feb 23 14:01:47 crc kubenswrapper[4851]: I0223 14:01:47.110317 4851 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kjx6d/must-gather-cc4zr" podStartSLOduration=1.503852515 podStartE2EDuration="7.110297994s" podCreationTimestamp="2026-02-23 14:01:40 +0000 UTC" firstStartedPulling="2026-02-23 14:01:40.891469349 +0000 UTC m=+3255.573173027" lastFinishedPulling="2026-02-23 14:01:46.497914828 +0000 UTC m=+3261.179618506" observedRunningTime="2026-02-23 14:01:47.102260007 +0000 UTC m=+3261.783963705" watchObservedRunningTime="2026-02-23 14:01:47.110297994 +0000 UTC m=+3261.792001672" Feb 23 14:01:49 crc kubenswrapper[4851]: E0223 14:01:49.347538 4851 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.5:52228->38.102.83.5:43093: read tcp 38.102.83.5:52228->38.102.83.5:43093: read: connection reset by peer Feb 23 14:01:50 crc kubenswrapper[4851]: I0223 14:01:50.013366 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kjx6d/crc-debug-t6pvc"] Feb 23 14:01:50 crc kubenswrapper[4851]: I0223 14:01:50.014649 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" Feb 23 14:01:50 crc kubenswrapper[4851]: I0223 14:01:50.053643 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg6c4\" (UniqueName: \"kubernetes.io/projected/ebdce425-9208-4a4f-a563-0185e33a0c8a-kube-api-access-zg6c4\") pod \"crc-debug-t6pvc\" (UID: \"ebdce425-9208-4a4f-a563-0185e33a0c8a\") " pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" Feb 23 14:01:50 crc kubenswrapper[4851]: I0223 14:01:50.053728 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ebdce425-9208-4a4f-a563-0185e33a0c8a-host\") pod \"crc-debug-t6pvc\" (UID: \"ebdce425-9208-4a4f-a563-0185e33a0c8a\") " pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" Feb 23 14:01:50 crc kubenswrapper[4851]: I0223 14:01:50.154983 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg6c4\" (UniqueName: \"kubernetes.io/projected/ebdce425-9208-4a4f-a563-0185e33a0c8a-kube-api-access-zg6c4\") pod \"crc-debug-t6pvc\" (UID: \"ebdce425-9208-4a4f-a563-0185e33a0c8a\") " pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" Feb 23 14:01:50 crc kubenswrapper[4851]: I0223 14:01:50.155104 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ebdce425-9208-4a4f-a563-0185e33a0c8a-host\") pod \"crc-debug-t6pvc\" (UID: \"ebdce425-9208-4a4f-a563-0185e33a0c8a\") " pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" Feb 23 14:01:50 crc kubenswrapper[4851]: I0223 14:01:50.155213 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ebdce425-9208-4a4f-a563-0185e33a0c8a-host\") pod \"crc-debug-t6pvc\" (UID: \"ebdce425-9208-4a4f-a563-0185e33a0c8a\") " pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" Feb 23 14:01:50 crc 
kubenswrapper[4851]: I0223 14:01:50.185105 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg6c4\" (UniqueName: \"kubernetes.io/projected/ebdce425-9208-4a4f-a563-0185e33a0c8a-kube-api-access-zg6c4\") pod \"crc-debug-t6pvc\" (UID: \"ebdce425-9208-4a4f-a563-0185e33a0c8a\") " pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" Feb 23 14:01:50 crc kubenswrapper[4851]: I0223 14:01:50.335630 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" Feb 23 14:01:50 crc kubenswrapper[4851]: W0223 14:01:50.366666 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebdce425_9208_4a4f_a563_0185e33a0c8a.slice/crio-ed45419356db861c452475636e89e1671b2cb60f2478b1c03ade4a05b9176c87 WatchSource:0}: Error finding container ed45419356db861c452475636e89e1671b2cb60f2478b1c03ade4a05b9176c87: Status 404 returned error can't find the container with id ed45419356db861c452475636e89e1671b2cb60f2478b1c03ade4a05b9176c87 Feb 23 14:01:51 crc kubenswrapper[4851]: I0223 14:01:51.125705 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" event={"ID":"ebdce425-9208-4a4f-a563-0185e33a0c8a","Type":"ContainerStarted","Data":"ed45419356db861c452475636e89e1671b2cb60f2478b1c03ade4a05b9176c87"} Feb 23 14:01:51 crc kubenswrapper[4851]: I0223 14:01:51.968662 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:01:51 crc kubenswrapper[4851]: E0223 14:01:51.969076 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:02:03 crc kubenswrapper[4851]: I0223 14:02:03.266049 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" event={"ID":"ebdce425-9208-4a4f-a563-0185e33a0c8a","Type":"ContainerStarted","Data":"31c08c2be232c7a46b585957b76a1de6a1137962588ba89242adedac8d6281b4"} Feb 23 14:02:03 crc kubenswrapper[4851]: I0223 14:02:03.280624 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" podStartSLOduration=2.147252364 podStartE2EDuration="14.280605621s" podCreationTimestamp="2026-02-23 14:01:49 +0000 UTC" firstStartedPulling="2026-02-23 14:01:50.369162882 +0000 UTC m=+3265.050866550" lastFinishedPulling="2026-02-23 14:02:02.502516129 +0000 UTC m=+3277.184219807" observedRunningTime="2026-02-23 14:02:03.276755522 +0000 UTC m=+3277.958459220" watchObservedRunningTime="2026-02-23 14:02:03.280605621 +0000 UTC m=+3277.962309299" Feb 23 14:02:03 crc kubenswrapper[4851]: I0223 14:02:03.973476 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:02:03 crc kubenswrapper[4851]: E0223 14:02:03.973954 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:02:17 crc kubenswrapper[4851]: I0223 14:02:17.974928 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:02:17 crc kubenswrapper[4851]: E0223 14:02:17.975604 4851 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:02:31 crc kubenswrapper[4851]: I0223 14:02:31.968702 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:02:31 crc kubenswrapper[4851]: E0223 14:02:31.969601 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:02:40 crc kubenswrapper[4851]: I0223 14:02:40.625083 4851 generic.go:334] "Generic (PLEG): container finished" podID="ebdce425-9208-4a4f-a563-0185e33a0c8a" containerID="31c08c2be232c7a46b585957b76a1de6a1137962588ba89242adedac8d6281b4" exitCode=0 Feb 23 14:02:40 crc kubenswrapper[4851]: I0223 14:02:40.625264 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" event={"ID":"ebdce425-9208-4a4f-a563-0185e33a0c8a","Type":"ContainerDied","Data":"31c08c2be232c7a46b585957b76a1de6a1137962588ba89242adedac8d6281b4"} Feb 23 14:02:41 crc kubenswrapper[4851]: I0223 14:02:41.758539 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" Feb 23 14:02:41 crc kubenswrapper[4851]: I0223 14:02:41.800909 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kjx6d/crc-debug-t6pvc"] Feb 23 14:02:41 crc kubenswrapper[4851]: I0223 14:02:41.815791 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kjx6d/crc-debug-t6pvc"] Feb 23 14:02:41 crc kubenswrapper[4851]: I0223 14:02:41.885082 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg6c4\" (UniqueName: \"kubernetes.io/projected/ebdce425-9208-4a4f-a563-0185e33a0c8a-kube-api-access-zg6c4\") pod \"ebdce425-9208-4a4f-a563-0185e33a0c8a\" (UID: \"ebdce425-9208-4a4f-a563-0185e33a0c8a\") " Feb 23 14:02:41 crc kubenswrapper[4851]: I0223 14:02:41.885174 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ebdce425-9208-4a4f-a563-0185e33a0c8a-host\") pod \"ebdce425-9208-4a4f-a563-0185e33a0c8a\" (UID: \"ebdce425-9208-4a4f-a563-0185e33a0c8a\") " Feb 23 14:02:41 crc kubenswrapper[4851]: I0223 14:02:41.885590 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebdce425-9208-4a4f-a563-0185e33a0c8a-host" (OuterVolumeSpecName: "host") pod "ebdce425-9208-4a4f-a563-0185e33a0c8a" (UID: "ebdce425-9208-4a4f-a563-0185e33a0c8a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 14:02:41 crc kubenswrapper[4851]: I0223 14:02:41.885826 4851 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ebdce425-9208-4a4f-a563-0185e33a0c8a-host\") on node \"crc\" DevicePath \"\"" Feb 23 14:02:41 crc kubenswrapper[4851]: I0223 14:02:41.890963 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebdce425-9208-4a4f-a563-0185e33a0c8a-kube-api-access-zg6c4" (OuterVolumeSpecName: "kube-api-access-zg6c4") pod "ebdce425-9208-4a4f-a563-0185e33a0c8a" (UID: "ebdce425-9208-4a4f-a563-0185e33a0c8a"). InnerVolumeSpecName "kube-api-access-zg6c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:02:41 crc kubenswrapper[4851]: I0223 14:02:41.977575 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebdce425-9208-4a4f-a563-0185e33a0c8a" path="/var/lib/kubelet/pods/ebdce425-9208-4a4f-a563-0185e33a0c8a/volumes" Feb 23 14:02:41 crc kubenswrapper[4851]: I0223 14:02:41.986924 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg6c4\" (UniqueName: \"kubernetes.io/projected/ebdce425-9208-4a4f-a563-0185e33a0c8a-kube-api-access-zg6c4\") on node \"crc\" DevicePath \"\"" Feb 23 14:02:42 crc kubenswrapper[4851]: E0223 14:02:42.151158 4851 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebdce425_9208_4a4f_a563_0185e33a0c8a.slice/crio-ed45419356db861c452475636e89e1671b2cb60f2478b1c03ade4a05b9176c87\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebdce425_9208_4a4f_a563_0185e33a0c8a.slice\": RecentStats: unable to find data in memory cache]" Feb 23 14:02:42 crc kubenswrapper[4851]: I0223 14:02:42.644726 4851 scope.go:117] "RemoveContainer" 
containerID="31c08c2be232c7a46b585957b76a1de6a1137962588ba89242adedac8d6281b4" Feb 23 14:02:42 crc kubenswrapper[4851]: I0223 14:02:42.644776 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-t6pvc" Feb 23 14:02:42 crc kubenswrapper[4851]: I0223 14:02:42.943203 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kjx6d/crc-debug-f9czj"] Feb 23 14:02:42 crc kubenswrapper[4851]: E0223 14:02:42.943592 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebdce425-9208-4a4f-a563-0185e33a0c8a" containerName="container-00" Feb 23 14:02:42 crc kubenswrapper[4851]: I0223 14:02:42.943605 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebdce425-9208-4a4f-a563-0185e33a0c8a" containerName="container-00" Feb 23 14:02:42 crc kubenswrapper[4851]: I0223 14:02:42.943782 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebdce425-9208-4a4f-a563-0185e33a0c8a" containerName="container-00" Feb 23 14:02:42 crc kubenswrapper[4851]: I0223 14:02:42.944307 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-f9czj" Feb 23 14:02:43 crc kubenswrapper[4851]: I0223 14:02:43.107786 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2q5h\" (UniqueName: \"kubernetes.io/projected/c0c1cef3-2d68-454b-b231-d70189a5d9cd-kube-api-access-b2q5h\") pod \"crc-debug-f9czj\" (UID: \"c0c1cef3-2d68-454b-b231-d70189a5d9cd\") " pod="openshift-must-gather-kjx6d/crc-debug-f9czj" Feb 23 14:02:43 crc kubenswrapper[4851]: I0223 14:02:43.107899 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0c1cef3-2d68-454b-b231-d70189a5d9cd-host\") pod \"crc-debug-f9czj\" (UID: \"c0c1cef3-2d68-454b-b231-d70189a5d9cd\") " pod="openshift-must-gather-kjx6d/crc-debug-f9czj" Feb 23 14:02:43 crc kubenswrapper[4851]: I0223 14:02:43.210178 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2q5h\" (UniqueName: \"kubernetes.io/projected/c0c1cef3-2d68-454b-b231-d70189a5d9cd-kube-api-access-b2q5h\") pod \"crc-debug-f9czj\" (UID: \"c0c1cef3-2d68-454b-b231-d70189a5d9cd\") " pod="openshift-must-gather-kjx6d/crc-debug-f9czj" Feb 23 14:02:43 crc kubenswrapper[4851]: I0223 14:02:43.210280 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0c1cef3-2d68-454b-b231-d70189a5d9cd-host\") pod \"crc-debug-f9czj\" (UID: \"c0c1cef3-2d68-454b-b231-d70189a5d9cd\") " pod="openshift-must-gather-kjx6d/crc-debug-f9czj" Feb 23 14:02:43 crc kubenswrapper[4851]: I0223 14:02:43.210527 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0c1cef3-2d68-454b-b231-d70189a5d9cd-host\") pod \"crc-debug-f9czj\" (UID: \"c0c1cef3-2d68-454b-b231-d70189a5d9cd\") " pod="openshift-must-gather-kjx6d/crc-debug-f9czj" Feb 23 14:02:43 crc 
kubenswrapper[4851]: I0223 14:02:43.227982 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2q5h\" (UniqueName: \"kubernetes.io/projected/c0c1cef3-2d68-454b-b231-d70189a5d9cd-kube-api-access-b2q5h\") pod \"crc-debug-f9czj\" (UID: \"c0c1cef3-2d68-454b-b231-d70189a5d9cd\") " pod="openshift-must-gather-kjx6d/crc-debug-f9czj" Feb 23 14:02:43 crc kubenswrapper[4851]: I0223 14:02:43.260965 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-f9czj" Feb 23 14:02:43 crc kubenswrapper[4851]: I0223 14:02:43.655054 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/crc-debug-f9czj" event={"ID":"c0c1cef3-2d68-454b-b231-d70189a5d9cd","Type":"ContainerStarted","Data":"766eb4d59123821bf77895658d2347c1cca905859e3c92147e7a7ea70c10f34b"} Feb 23 14:02:43 crc kubenswrapper[4851]: I0223 14:02:43.655379 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/crc-debug-f9czj" event={"ID":"c0c1cef3-2d68-454b-b231-d70189a5d9cd","Type":"ContainerStarted","Data":"904e3a960b57827fc3c29f06a0e8c5ce49d982ea887152260b5cf866dc49265e"} Feb 23 14:02:43 crc kubenswrapper[4851]: I0223 14:02:43.669234 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kjx6d/crc-debug-f9czj" podStartSLOduration=1.669168425 podStartE2EDuration="1.669168425s" podCreationTimestamp="2026-02-23 14:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 14:02:43.667273591 +0000 UTC m=+3318.348977289" watchObservedRunningTime="2026-02-23 14:02:43.669168425 +0000 UTC m=+3318.350872113" Feb 23 14:02:44 crc kubenswrapper[4851]: I0223 14:02:44.665086 4851 generic.go:334] "Generic (PLEG): container finished" podID="c0c1cef3-2d68-454b-b231-d70189a5d9cd" 
containerID="766eb4d59123821bf77895658d2347c1cca905859e3c92147e7a7ea70c10f34b" exitCode=0 Feb 23 14:02:44 crc kubenswrapper[4851]: I0223 14:02:44.665125 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/crc-debug-f9czj" event={"ID":"c0c1cef3-2d68-454b-b231-d70189a5d9cd","Type":"ContainerDied","Data":"766eb4d59123821bf77895658d2347c1cca905859e3c92147e7a7ea70c10f34b"} Feb 23 14:02:44 crc kubenswrapper[4851]: I0223 14:02:44.968988 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:02:44 crc kubenswrapper[4851]: E0223 14:02:44.969854 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:02:45 crc kubenswrapper[4851]: I0223 14:02:45.806843 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-f9czj" Feb 23 14:02:45 crc kubenswrapper[4851]: I0223 14:02:45.840966 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kjx6d/crc-debug-f9czj"] Feb 23 14:02:45 crc kubenswrapper[4851]: I0223 14:02:45.848256 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kjx6d/crc-debug-f9czj"] Feb 23 14:02:45 crc kubenswrapper[4851]: I0223 14:02:45.858765 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2q5h\" (UniqueName: \"kubernetes.io/projected/c0c1cef3-2d68-454b-b231-d70189a5d9cd-kube-api-access-b2q5h\") pod \"c0c1cef3-2d68-454b-b231-d70189a5d9cd\" (UID: \"c0c1cef3-2d68-454b-b231-d70189a5d9cd\") " Feb 23 14:02:45 crc kubenswrapper[4851]: I0223 14:02:45.858918 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0c1cef3-2d68-454b-b231-d70189a5d9cd-host\") pod \"c0c1cef3-2d68-454b-b231-d70189a5d9cd\" (UID: \"c0c1cef3-2d68-454b-b231-d70189a5d9cd\") " Feb 23 14:02:45 crc kubenswrapper[4851]: I0223 14:02:45.859070 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0c1cef3-2d68-454b-b231-d70189a5d9cd-host" (OuterVolumeSpecName: "host") pod "c0c1cef3-2d68-454b-b231-d70189a5d9cd" (UID: "c0c1cef3-2d68-454b-b231-d70189a5d9cd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 14:02:45 crc kubenswrapper[4851]: I0223 14:02:45.859600 4851 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0c1cef3-2d68-454b-b231-d70189a5d9cd-host\") on node \"crc\" DevicePath \"\"" Feb 23 14:02:45 crc kubenswrapper[4851]: I0223 14:02:45.863686 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c1cef3-2d68-454b-b231-d70189a5d9cd-kube-api-access-b2q5h" (OuterVolumeSpecName: "kube-api-access-b2q5h") pod "c0c1cef3-2d68-454b-b231-d70189a5d9cd" (UID: "c0c1cef3-2d68-454b-b231-d70189a5d9cd"). InnerVolumeSpecName "kube-api-access-b2q5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:02:45 crc kubenswrapper[4851]: I0223 14:02:45.962613 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2q5h\" (UniqueName: \"kubernetes.io/projected/c0c1cef3-2d68-454b-b231-d70189a5d9cd-kube-api-access-b2q5h\") on node \"crc\" DevicePath \"\"" Feb 23 14:02:45 crc kubenswrapper[4851]: I0223 14:02:45.980587 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c1cef3-2d68-454b-b231-d70189a5d9cd" path="/var/lib/kubelet/pods/c0c1cef3-2d68-454b-b231-d70189a5d9cd/volumes" Feb 23 14:02:46 crc kubenswrapper[4851]: I0223 14:02:46.681511 4851 scope.go:117] "RemoveContainer" containerID="766eb4d59123821bf77895658d2347c1cca905859e3c92147e7a7ea70c10f34b" Feb 23 14:02:46 crc kubenswrapper[4851]: I0223 14:02:46.681907 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-f9czj" Feb 23 14:02:46 crc kubenswrapper[4851]: I0223 14:02:46.969563 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kjx6d/crc-debug-hgd89"] Feb 23 14:02:46 crc kubenswrapper[4851]: E0223 14:02:46.969959 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c1cef3-2d68-454b-b231-d70189a5d9cd" containerName="container-00" Feb 23 14:02:46 crc kubenswrapper[4851]: I0223 14:02:46.969975 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c1cef3-2d68-454b-b231-d70189a5d9cd" containerName="container-00" Feb 23 14:02:46 crc kubenswrapper[4851]: I0223 14:02:46.970200 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c1cef3-2d68-454b-b231-d70189a5d9cd" containerName="container-00" Feb 23 14:02:46 crc kubenswrapper[4851]: I0223 14:02:46.970887 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-hgd89" Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.085766 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwh8\" (UniqueName: \"kubernetes.io/projected/b1a0ceb2-3749-4801-9dee-49b716bb7e41-kube-api-access-zkwh8\") pod \"crc-debug-hgd89\" (UID: \"b1a0ceb2-3749-4801-9dee-49b716bb7e41\") " pod="openshift-must-gather-kjx6d/crc-debug-hgd89" Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.086231 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1a0ceb2-3749-4801-9dee-49b716bb7e41-host\") pod \"crc-debug-hgd89\" (UID: \"b1a0ceb2-3749-4801-9dee-49b716bb7e41\") " pod="openshift-must-gather-kjx6d/crc-debug-hgd89" Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.188281 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/b1a0ceb2-3749-4801-9dee-49b716bb7e41-host\") pod \"crc-debug-hgd89\" (UID: \"b1a0ceb2-3749-4801-9dee-49b716bb7e41\") " pod="openshift-must-gather-kjx6d/crc-debug-hgd89" Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.188404 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1a0ceb2-3749-4801-9dee-49b716bb7e41-host\") pod \"crc-debug-hgd89\" (UID: \"b1a0ceb2-3749-4801-9dee-49b716bb7e41\") " pod="openshift-must-gather-kjx6d/crc-debug-hgd89" Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.188578 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkwh8\" (UniqueName: \"kubernetes.io/projected/b1a0ceb2-3749-4801-9dee-49b716bb7e41-kube-api-access-zkwh8\") pod \"crc-debug-hgd89\" (UID: \"b1a0ceb2-3749-4801-9dee-49b716bb7e41\") " pod="openshift-must-gather-kjx6d/crc-debug-hgd89" Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.206696 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkwh8\" (UniqueName: \"kubernetes.io/projected/b1a0ceb2-3749-4801-9dee-49b716bb7e41-kube-api-access-zkwh8\") pod \"crc-debug-hgd89\" (UID: \"b1a0ceb2-3749-4801-9dee-49b716bb7e41\") " pod="openshift-must-gather-kjx6d/crc-debug-hgd89" Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.290129 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-hgd89" Feb 23 14:02:47 crc kubenswrapper[4851]: W0223 14:02:47.322386 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1a0ceb2_3749_4801_9dee_49b716bb7e41.slice/crio-4e5871c931880d409a8c704aab732d2f79406d0faf99195a624701462e875676 WatchSource:0}: Error finding container 4e5871c931880d409a8c704aab732d2f79406d0faf99195a624701462e875676: Status 404 returned error can't find the container with id 4e5871c931880d409a8c704aab732d2f79406d0faf99195a624701462e875676 Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.691026 4851 generic.go:334] "Generic (PLEG): container finished" podID="b1a0ceb2-3749-4801-9dee-49b716bb7e41" containerID="23886595a0da38490745294389de3b8b7a64db4490ef200c6300c147334f00d0" exitCode=0 Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.691105 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/crc-debug-hgd89" event={"ID":"b1a0ceb2-3749-4801-9dee-49b716bb7e41","Type":"ContainerDied","Data":"23886595a0da38490745294389de3b8b7a64db4490ef200c6300c147334f00d0"} Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.691135 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/crc-debug-hgd89" event={"ID":"b1a0ceb2-3749-4801-9dee-49b716bb7e41","Type":"ContainerStarted","Data":"4e5871c931880d409a8c704aab732d2f79406d0faf99195a624701462e875676"} Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.730261 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kjx6d/crc-debug-hgd89"] Feb 23 14:02:47 crc kubenswrapper[4851]: I0223 14:02:47.737379 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kjx6d/crc-debug-hgd89"] Feb 23 14:02:48 crc kubenswrapper[4851]: I0223 14:02:48.807608 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-hgd89" Feb 23 14:02:48 crc kubenswrapper[4851]: I0223 14:02:48.926629 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkwh8\" (UniqueName: \"kubernetes.io/projected/b1a0ceb2-3749-4801-9dee-49b716bb7e41-kube-api-access-zkwh8\") pod \"b1a0ceb2-3749-4801-9dee-49b716bb7e41\" (UID: \"b1a0ceb2-3749-4801-9dee-49b716bb7e41\") " Feb 23 14:02:48 crc kubenswrapper[4851]: I0223 14:02:48.926738 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1a0ceb2-3749-4801-9dee-49b716bb7e41-host\") pod \"b1a0ceb2-3749-4801-9dee-49b716bb7e41\" (UID: \"b1a0ceb2-3749-4801-9dee-49b716bb7e41\") " Feb 23 14:02:48 crc kubenswrapper[4851]: I0223 14:02:48.926927 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1a0ceb2-3749-4801-9dee-49b716bb7e41-host" (OuterVolumeSpecName: "host") pod "b1a0ceb2-3749-4801-9dee-49b716bb7e41" (UID: "b1a0ceb2-3749-4801-9dee-49b716bb7e41"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 14:02:48 crc kubenswrapper[4851]: I0223 14:02:48.927500 4851 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b1a0ceb2-3749-4801-9dee-49b716bb7e41-host\") on node \"crc\" DevicePath \"\"" Feb 23 14:02:48 crc kubenswrapper[4851]: I0223 14:02:48.938237 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a0ceb2-3749-4801-9dee-49b716bb7e41-kube-api-access-zkwh8" (OuterVolumeSpecName: "kube-api-access-zkwh8") pod "b1a0ceb2-3749-4801-9dee-49b716bb7e41" (UID: "b1a0ceb2-3749-4801-9dee-49b716bb7e41"). InnerVolumeSpecName "kube-api-access-zkwh8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:02:49 crc kubenswrapper[4851]: I0223 14:02:49.028912 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkwh8\" (UniqueName: \"kubernetes.io/projected/b1a0ceb2-3749-4801-9dee-49b716bb7e41-kube-api-access-zkwh8\") on node \"crc\" DevicePath \"\"" Feb 23 14:02:49 crc kubenswrapper[4851]: I0223 14:02:49.709170 4851 scope.go:117] "RemoveContainer" containerID="23886595a0da38490745294389de3b8b7a64db4490ef200c6300c147334f00d0" Feb 23 14:02:49 crc kubenswrapper[4851]: I0223 14:02:49.709195 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kjx6d/crc-debug-hgd89" Feb 23 14:02:49 crc kubenswrapper[4851]: I0223 14:02:49.980463 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a0ceb2-3749-4801-9dee-49b716bb7e41" path="/var/lib/kubelet/pods/b1a0ceb2-3749-4801-9dee-49b716bb7e41/volumes" Feb 23 14:02:59 crc kubenswrapper[4851]: I0223 14:02:59.969017 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:02:59 crc kubenswrapper[4851]: E0223 14:02:59.969762 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:03:02 crc kubenswrapper[4851]: I0223 14:03:02.301130 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6bdd9b889b-qd9cm_6134ed19-8856-4c53-b30c-eee8089381fb/barbican-api/0.log" Feb 23 14:03:02 crc kubenswrapper[4851]: I0223 14:03:02.439395 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-6bdd9b889b-qd9cm_6134ed19-8856-4c53-b30c-eee8089381fb/barbican-api-log/0.log" Feb 23 14:03:02 crc kubenswrapper[4851]: I0223 14:03:02.516019 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-66757bd65d-pn2zh_42f43676-ccd3-45e3-b729-ab33430aca9a/barbican-keystone-listener/0.log" Feb 23 14:03:02 crc kubenswrapper[4851]: I0223 14:03:02.558167 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-66757bd65d-pn2zh_42f43676-ccd3-45e3-b729-ab33430aca9a/barbican-keystone-listener-log/0.log" Feb 23 14:03:02 crc kubenswrapper[4851]: I0223 14:03:02.689001 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7546f4466c-vlsxg_fc271fbe-58c9-4eca-adfd-63ff51aa46fa/barbican-worker/0.log" Feb 23 14:03:02 crc kubenswrapper[4851]: I0223 14:03:02.693782 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7546f4466c-vlsxg_fc271fbe-58c9-4eca-adfd-63ff51aa46fa/barbican-worker-log/0.log" Feb 23 14:03:02 crc kubenswrapper[4851]: I0223 14:03:02.876535 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9_a83f6021-68fd-4a69-8d49-534de4546eee/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:02 crc kubenswrapper[4851]: I0223 14:03:02.946966 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b0421c96-8b66-48fd-9778-da16d4eb8ef0/ceilometer-central-agent/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 14:03:03.036459 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b0421c96-8b66-48fd-9778-da16d4eb8ef0/proxy-httpd/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 14:03:03.048924 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_b0421c96-8b66-48fd-9778-da16d4eb8ef0/ceilometer-notification-agent/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 14:03:03.069260 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b0421c96-8b66-48fd-9778-da16d4eb8ef0/sg-core/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 14:03:03.260053 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c260317a-0cb6-475e-b780-50f6de86dda2/cinder-api-log/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 14:03:03.295948 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c260317a-0cb6-475e-b780-50f6de86dda2/cinder-api/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 14:03:03.368515 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2894d16c-17aa-4037-afa2-37081858ab01/cinder-scheduler/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 14:03:03.472260 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2894d16c-17aa-4037-afa2-37081858ab01/probe/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 14:03:03.519404 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz_1a7542f5-0c08-40fc-a218-f196e7769853/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 14:03:03.806941 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-vjdpf_b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f/init/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 14:03:03.821808 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-4fdws_75449ea8-fea6-480f-8a8c-10d24081a76f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 
14:03:03.923271 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-vjdpf_b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f/dnsmasq-dns/0.log" Feb 23 14:03:03 crc kubenswrapper[4851]: I0223 14:03:03.941664 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-vjdpf_b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f/init/0.log" Feb 23 14:03:04 crc kubenswrapper[4851]: I0223 14:03:04.039145 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx_1a88cbca-158c-4879-a5ef-48b9714a4043/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:04 crc kubenswrapper[4851]: I0223 14:03:04.137971 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_839d4518-f84b-4a2c-81eb-c0112da70e71/glance-httpd/0.log" Feb 23 14:03:04 crc kubenswrapper[4851]: I0223 14:03:04.212114 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_839d4518-f84b-4a2c-81eb-c0112da70e71/glance-log/0.log" Feb 23 14:03:04 crc kubenswrapper[4851]: I0223 14:03:04.300410 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_17103db4-b198-4896-8bec-1e1d1bf8efa1/glance-httpd/0.log" Feb 23 14:03:04 crc kubenswrapper[4851]: I0223 14:03:04.331270 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_17103db4-b198-4896-8bec-1e1d1bf8efa1/glance-log/0.log" Feb 23 14:03:04 crc kubenswrapper[4851]: I0223 14:03:04.526848 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-64f4c4f478-f578z_1c52d079-d9d5-469e-9319-08266bea1f82/horizon/0.log" Feb 23 14:03:04 crc kubenswrapper[4851]: I0223 14:03:04.725298 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh_fde15470-10ed-44ef-8ba7-a03c9046f828/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:04 crc kubenswrapper[4851]: I0223 14:03:04.778303 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-64f4c4f478-f578z_1c52d079-d9d5-469e-9319-08266bea1f82/horizon-log/0.log" Feb 23 14:03:04 crc kubenswrapper[4851]: I0223 14:03:04.863863 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rxvsh_bc81c277-18fa-44d5-8211-37e2b5ca5069/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:05 crc kubenswrapper[4851]: I0223 14:03:05.080510 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29530921-92sp9_9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e/keystone-cron/0.log" Feb 23 14:03:05 crc kubenswrapper[4851]: I0223 14:03:05.125048 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-b7f866994-tdwdz_20cfa6bd-a3d2-4e2c-9655-6b4db78b1771/keystone-api/0.log" Feb 23 14:03:05 crc kubenswrapper[4851]: I0223 14:03:05.232857 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a0a55625-8b81-4ce9-afb2-2220598ce375/kube-state-metrics/0.log" Feb 23 14:03:05 crc kubenswrapper[4851]: I0223 14:03:05.393650 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5_7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:05 crc kubenswrapper[4851]: I0223 14:03:05.749714 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bd58878f7-xhsz6_293a3d32-d143-4600-bb4e-50f2c5783f67/neutron-api/0.log" Feb 23 14:03:05 crc kubenswrapper[4851]: I0223 14:03:05.808876 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-7bd58878f7-xhsz6_293a3d32-d143-4600-bb4e-50f2c5783f67/neutron-httpd/0.log" Feb 23 14:03:06 crc kubenswrapper[4851]: I0223 14:03:06.021499 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v_01be1f4b-d5f3-4dbe-b528-118617cdad1e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:06 crc kubenswrapper[4851]: I0223 14:03:06.474548 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f7011aa2-a15d-4c99-b0a1-ae8d530b84c2/nova-cell0-conductor-conductor/0.log" Feb 23 14:03:06 crc kubenswrapper[4851]: I0223 14:03:06.520649 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6c5dd0b3-902e-4156-9538-fccbb6f319ae/nova-api-log/0.log" Feb 23 14:03:06 crc kubenswrapper[4851]: I0223 14:03:06.628442 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6c5dd0b3-902e-4156-9538-fccbb6f319ae/nova-api-api/0.log" Feb 23 14:03:06 crc kubenswrapper[4851]: I0223 14:03:06.760638 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_b5c64fb8-ea75-4ede-b285-7aeb434b96d4/nova-cell1-conductor-conductor/0.log" Feb 23 14:03:06 crc kubenswrapper[4851]: I0223 14:03:06.810691 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0284ac99-e112-44af-b198-eb9d42478701/nova-cell1-novncproxy-novncproxy/0.log" Feb 23 14:03:07 crc kubenswrapper[4851]: I0223 14:03:07.017717 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-q95kg_85e1b392-9aa6-4cd1-93b0-fa3587de47ac/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:07 crc kubenswrapper[4851]: I0223 14:03:07.191271 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_f06b9e12-5e93-4ed8-80f1-733ce28508c1/nova-metadata-log/0.log" Feb 23 14:03:07 crc kubenswrapper[4851]: I0223 14:03:07.421860 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_dcffae8a-b5fd-49bf-9316-1cc871d0568c/nova-scheduler-scheduler/0.log" Feb 23 14:03:07 crc kubenswrapper[4851]: I0223 14:03:07.477862 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cb3d0e2c-9427-4585-8f01-0e1640feca9a/mysql-bootstrap/0.log" Feb 23 14:03:07 crc kubenswrapper[4851]: I0223 14:03:07.705995 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cb3d0e2c-9427-4585-8f01-0e1640feca9a/galera/0.log" Feb 23 14:03:07 crc kubenswrapper[4851]: I0223 14:03:07.720264 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cb3d0e2c-9427-4585-8f01-0e1640feca9a/mysql-bootstrap/0.log" Feb 23 14:03:07 crc kubenswrapper[4851]: I0223 14:03:07.896839 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a0d61403-fda9-4081-8c39-32ff86cc879c/mysql-bootstrap/0.log" Feb 23 14:03:08 crc kubenswrapper[4851]: I0223 14:03:08.108819 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f06b9e12-5e93-4ed8-80f1-733ce28508c1/nova-metadata-metadata/0.log" Feb 23 14:03:08 crc kubenswrapper[4851]: I0223 14:03:08.137222 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a0d61403-fda9-4081-8c39-32ff86cc879c/mysql-bootstrap/0.log" Feb 23 14:03:08 crc kubenswrapper[4851]: I0223 14:03:08.190651 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a0d61403-fda9-4081-8c39-32ff86cc879c/galera/0.log" Feb 23 14:03:08 crc kubenswrapper[4851]: I0223 14:03:08.336501 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_86b670f3-6886-4a48-b0ec-a109e93c87a0/openstackclient/0.log" Feb 23 14:03:08 crc kubenswrapper[4851]: I0223 14:03:08.361615 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2rf22_f366da8b-d0d3-411e-afec-53af288b0c42/ovn-controller/0.log" Feb 23 14:03:08 crc kubenswrapper[4851]: I0223 14:03:08.529957 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bnnms_a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d/openstack-network-exporter/0.log" Feb 23 14:03:08 crc kubenswrapper[4851]: I0223 14:03:08.636396 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-42p6n_d88acd5e-87c7-4b36-9aad-d20d44b7d0bf/ovsdb-server-init/0.log" Feb 23 14:03:08 crc kubenswrapper[4851]: I0223 14:03:08.805113 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-42p6n_d88acd5e-87c7-4b36-9aad-d20d44b7d0bf/ovsdb-server-init/0.log" Feb 23 14:03:08 crc kubenswrapper[4851]: I0223 14:03:08.807441 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-42p6n_d88acd5e-87c7-4b36-9aad-d20d44b7d0bf/ovsdb-server/0.log" Feb 23 14:03:08 crc kubenswrapper[4851]: I0223 14:03:08.847589 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-42p6n_d88acd5e-87c7-4b36-9aad-d20d44b7d0bf/ovs-vswitchd/0.log" Feb 23 14:03:09 crc kubenswrapper[4851]: I0223 14:03:09.047437 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5frpq_b70c39f9-b146-4980-bb34-0034ed5b8b86/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:09 crc kubenswrapper[4851]: I0223 14:03:09.093265 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2/ovn-northd/0.log" Feb 23 14:03:09 crc kubenswrapper[4851]: I0223 14:03:09.127650 4851 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2/openstack-network-exporter/0.log" Feb 23 14:03:09 crc kubenswrapper[4851]: I0223 14:03:09.402678 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d6c7eb0b-bab9-47af-b0e9-fd539479e252/ovsdbserver-nb/0.log" Feb 23 14:03:09 crc kubenswrapper[4851]: I0223 14:03:09.446397 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d6c7eb0b-bab9-47af-b0e9-fd539479e252/openstack-network-exporter/0.log" Feb 23 14:03:09 crc kubenswrapper[4851]: I0223 14:03:09.542687 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_68f10652-af07-4024-b8b6-91d8e8974144/openstack-network-exporter/0.log" Feb 23 14:03:09 crc kubenswrapper[4851]: I0223 14:03:09.616374 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_68f10652-af07-4024-b8b6-91d8e8974144/ovsdbserver-sb/0.log" Feb 23 14:03:09 crc kubenswrapper[4851]: I0223 14:03:09.788994 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7d6d46b468-7drjb_4c14e85a-4380-49f8-8311-abcaa3587c47/placement-api/0.log" Feb 23 14:03:09 crc kubenswrapper[4851]: I0223 14:03:09.803126 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7d6d46b468-7drjb_4c14e85a-4380-49f8-8311-abcaa3587c47/placement-log/0.log" Feb 23 14:03:09 crc kubenswrapper[4851]: I0223 14:03:09.949543 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d2aa1b0e-e4a7-4365-99c9-4e521e896925/setup-container/0.log" Feb 23 14:03:10 crc kubenswrapper[4851]: I0223 14:03:10.160871 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_44d82832-bb2c-4bfe-a9c0-a22e00484c71/setup-container/0.log" Feb 23 14:03:10 crc kubenswrapper[4851]: I0223 14:03:10.186234 4851 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d2aa1b0e-e4a7-4365-99c9-4e521e896925/setup-container/0.log" Feb 23 14:03:10 crc kubenswrapper[4851]: I0223 14:03:10.261604 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d2aa1b0e-e4a7-4365-99c9-4e521e896925/rabbitmq/0.log" Feb 23 14:03:10 crc kubenswrapper[4851]: I0223 14:03:10.359707 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_44d82832-bb2c-4bfe-a9c0-a22e00484c71/setup-container/0.log" Feb 23 14:03:10 crc kubenswrapper[4851]: I0223 14:03:10.387926 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_44d82832-bb2c-4bfe-a9c0-a22e00484c71/rabbitmq/0.log" Feb 23 14:03:10 crc kubenswrapper[4851]: I0223 14:03:10.448909 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm_964cf639-e7d3-402e-80f9-d8d27ebf5db7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:10 crc kubenswrapper[4851]: I0223 14:03:10.674653 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zgsdq_7dc4f23f-fc11-4cf6-9740-ec259ac3823e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:10 crc kubenswrapper[4851]: I0223 14:03:10.674869 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj_0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:10 crc kubenswrapper[4851]: I0223 14:03:10.896194 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-kd2p6_3a4f0b71-7653-49fa-9155-3e0d4197e087/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:10 crc kubenswrapper[4851]: I0223 14:03:10.985585 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-thkvh_6d0c05be-7530-42cf-86e9-e0d67e24ce4d/ssh-known-hosts-edpm-deployment/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.230737 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5ccf5dc859-8drcp_70c040ea-0409-4501-9416-f1f40c5c6882/proxy-server/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.265891 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5ccf5dc859-8drcp_70c040ea-0409-4501-9416-f1f40c5c6882/proxy-httpd/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.344015 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-947gb_365ea813-ed43-4771-a20a-d8ad58487d86/swift-ring-rebalance/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.432135 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/account-auditor/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.487529 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/account-reaper/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.587888 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/account-replicator/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.647719 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/account-server/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.691431 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/container-auditor/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.739016 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/container-replicator/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.818503 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/container-server/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.833360 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/container-updater/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.919930 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/object-auditor/0.log" Feb 23 14:03:11 crc kubenswrapper[4851]: I0223 14:03:11.944857 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/object-expirer/0.log" Feb 23 14:03:12 crc kubenswrapper[4851]: I0223 14:03:12.061074 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/object-server/0.log" Feb 23 14:03:12 crc kubenswrapper[4851]: I0223 14:03:12.078776 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/object-replicator/0.log" Feb 23 14:03:12 crc kubenswrapper[4851]: I0223 14:03:12.152870 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/object-updater/0.log" Feb 23 14:03:12 crc kubenswrapper[4851]: I0223 14:03:12.191168 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/rsync/0.log" Feb 23 14:03:12 crc kubenswrapper[4851]: I0223 14:03:12.316413 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/swift-recon-cron/0.log" Feb 23 14:03:12 crc kubenswrapper[4851]: I0223 14:03:12.454198 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-mhk45_ec787d1d-3f44-445b-a2ad-0d0b9ce7f476/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:12 crc kubenswrapper[4851]: I0223 14:03:12.591356 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_85d7dda0-1545-4b56-9694-c704cfec078c/tempest-tests-tempest-tests-runner/0.log" Feb 23 14:03:12 crc kubenswrapper[4851]: I0223 14:03:12.616927 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_68dbe829-aaf9-45eb-9b13-1c7e73a34cb6/test-operator-logs-container/0.log" Feb 23 14:03:12 crc kubenswrapper[4851]: I0223 14:03:12.796681 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7phqr_33d023b8-6967-4bc9-813e-08892dfa7107/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:03:12 crc kubenswrapper[4851]: I0223 14:03:12.969488 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:03:12 crc kubenswrapper[4851]: E0223 14:03:12.969959 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:03:19 crc kubenswrapper[4851]: I0223 14:03:19.588620 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_aee353b8-8a37-4055-a016-2c1aac2cf20b/memcached/0.log" Feb 23 14:03:23 crc kubenswrapper[4851]: I0223 14:03:23.968785 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:03:23 crc kubenswrapper[4851]: E0223 14:03:23.969283 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:03:35 crc kubenswrapper[4851]: I0223 14:03:35.450687 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/util/0.log" Feb 23 14:03:35 crc kubenswrapper[4851]: I0223 14:03:35.588829 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/util/0.log" Feb 23 14:03:35 crc kubenswrapper[4851]: I0223 14:03:35.607501 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/pull/0.log" Feb 23 14:03:35 crc kubenswrapper[4851]: I0223 14:03:35.649611 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/pull/0.log" Feb 23 14:03:35 crc kubenswrapper[4851]: I0223 14:03:35.851805 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/extract/0.log" Feb 23 14:03:35 crc kubenswrapper[4851]: I0223 14:03:35.863647 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/pull/0.log" Feb 23 14:03:35 crc kubenswrapper[4851]: I0223 14:03:35.886500 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/util/0.log" Feb 23 14:03:35 crc kubenswrapper[4851]: I0223 14:03:35.975826 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:03:35 crc kubenswrapper[4851]: E0223 14:03:35.976221 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:03:36 crc kubenswrapper[4851]: I0223 14:03:36.308028 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-k8pws_115cc313-eea6-40cd-9e8a-a7205e83cc07/manager/0.log" Feb 23 14:03:36 crc kubenswrapper[4851]: I0223 14:03:36.610745 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-rm79x_f17a63ea-4b87-429b-8c90-58790c572b9e/manager/0.log" Feb 23 14:03:36 crc kubenswrapper[4851]: I0223 14:03:36.753102 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-tvt8g_6fb817cf-5b9d-4879-a997-cd3f1d99db3c/manager/0.log" Feb 23 14:03:37 crc kubenswrapper[4851]: I0223 14:03:37.042988 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-8wdqc_9abe19ef-7cfa-43dd-983c-bcef5a540100/manager/0.log" Feb 23 14:03:37 crc kubenswrapper[4851]: I0223 14:03:37.412453 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-bctck_ef730879-0a7d-4e4a-925e-8ef30c366d64/manager/0.log" Feb 23 14:03:37 crc kubenswrapper[4851]: I0223 14:03:37.582742 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-2xlr5_ca30fe6b-5b33-4e6e-acb5-93a49ae9257d/manager/0.log" Feb 23 14:03:37 crc kubenswrapper[4851]: I0223 14:03:37.675484 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-b827v_834a522f-ca03-403d-8402-679845f7c6c3/manager/0.log" Feb 23 14:03:37 crc kubenswrapper[4851]: I0223 14:03:37.829824 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-pd5bf_71fd5f4f-a9fc-4242-813a-3fb7d5827c41/manager/0.log" Feb 23 14:03:37 crc kubenswrapper[4851]: I0223 14:03:37.923178 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-sd26k_0dbb5228-ae4a-427d-97a2-3768b460e134/manager/0.log" Feb 23 14:03:38 crc kubenswrapper[4851]: I0223 14:03:38.103050 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-vrjqg_40f2272b-7e63-4666-b858-9722a0af16c8/manager/0.log" Feb 23 14:03:38 crc kubenswrapper[4851]: I0223 14:03:38.413752 4851 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-rbgkf_84a8d9f7-24b2-4f08-a917-b614dc537ffe/manager/0.log" Feb 23 14:03:38 crc kubenswrapper[4851]: I0223 14:03:38.469218 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-65dws_33df439b-30ca-4397-a992-be2de607477a/manager/0.log" Feb 23 14:03:38 crc kubenswrapper[4851]: I0223 14:03:38.586185 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-x2gtd_fbc0edce-88b5-4ddc-8495-01e33e7a7753/manager/0.log" Feb 23 14:03:38 crc kubenswrapper[4851]: I0223 14:03:38.775274 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm_e289a048-8c1a-4349-8b3b-8f3628e23bdc/manager/0.log" Feb 23 14:03:39 crc kubenswrapper[4851]: I0223 14:03:39.119994 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-567cd64b9b-qlxlf_5723937f-de2b-455f-9015-e13595ee88e3/operator/0.log" Feb 23 14:03:39 crc kubenswrapper[4851]: I0223 14:03:39.314383 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rfcf2_edc46ca6-ff8f-4f31-981d-633b7a3766b1/registry-server/0.log" Feb 23 14:03:39 crc kubenswrapper[4851]: I0223 14:03:39.653044 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-x9kh8_ddcb4697-a6af-4baa-bd78-ae1f3b47c6af/manager/0.log" Feb 23 14:03:39 crc kubenswrapper[4851]: I0223 14:03:39.748882 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-hktpx_f9f540e9-5c10-4e33-b283-328276817914/manager/0.log" Feb 23 14:03:39 crc kubenswrapper[4851]: I0223 14:03:39.859064 4851 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-sv2f9_18ea2332-4904-4213-9ba2-c678a2125b37/operator/0.log" Feb 23 14:03:40 crc kubenswrapper[4851]: I0223 14:03:40.179948 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-bwbpw_946a66f3-be29-4e8b-a800-637ef24a5694/manager/0.log" Feb 23 14:03:40 crc kubenswrapper[4851]: I0223 14:03:40.257460 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-khm2l_2a0bac92-ab56-4f67-a3a8-09ea4de25ae5/manager/0.log" Feb 23 14:03:40 crc kubenswrapper[4851]: I0223 14:03:40.381133 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-zktk2_6769f01c-bcc7-4e3e-a791-0fa315f82b37/manager/0.log" Feb 23 14:03:40 crc kubenswrapper[4851]: I0223 14:03:40.507282 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-gdn69_ea9b35cb-5758-42d8-8877-ceb1e19eb751/manager/0.log" Feb 23 14:03:40 crc kubenswrapper[4851]: I0223 14:03:40.929221 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68bc894585-xr5dt_7a7fd548-a78f-4096-b68a-2bc28b937e96/manager/0.log" Feb 23 14:03:42 crc kubenswrapper[4851]: I0223 14:03:42.726946 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-nhj9r_c1c9227e-ff98-4005-ba5c-e2cfa2f9bb44/manager/0.log" Feb 23 14:03:49 crc kubenswrapper[4851]: I0223 14:03:49.969464 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:03:49 crc kubenswrapper[4851]: E0223 14:03:49.970251 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:03:58 crc kubenswrapper[4851]: I0223 14:03:58.493223 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-klwfn_62353140-dab7-459f-b0d4-c796087cb3f9/control-plane-machine-set-operator/0.log" Feb 23 14:03:58 crc kubenswrapper[4851]: I0223 14:03:58.665706 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dw6fk_8f8399a9-b50e-4ccb-8ab8-3e245ab4f229/kube-rbac-proxy/0.log" Feb 23 14:03:58 crc kubenswrapper[4851]: I0223 14:03:58.690985 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dw6fk_8f8399a9-b50e-4ccb-8ab8-3e245ab4f229/machine-api-operator/0.log" Feb 23 14:04:00 crc kubenswrapper[4851]: I0223 14:04:00.968665 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:04:00 crc kubenswrapper[4851]: E0223 14:04:00.969240 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:04:10 crc kubenswrapper[4851]: I0223 14:04:10.478297 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-hrfw4_4e7d2a84-a59b-4489-98c0-78b2b3dc607c/cert-manager-controller/0.log" Feb 23 14:04:10 crc kubenswrapper[4851]: I0223 14:04:10.912651 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-qwxvn_af206ef2-9ee3-4eeb-81c6-5a82bef57eb0/cert-manager-cainjector/0.log" Feb 23 14:04:10 crc kubenswrapper[4851]: I0223 14:04:10.946171 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-k2vvq_b8e00a19-b1f6-4672-84e7-cc8abd468123/cert-manager-webhook/0.log" Feb 23 14:04:15 crc kubenswrapper[4851]: I0223 14:04:15.974944 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:04:15 crc kubenswrapper[4851]: E0223 14:04:15.975673 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:04:23 crc kubenswrapper[4851]: I0223 14:04:23.296405 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-qm7j9_cce85f53-7343-48af-8c40-275e87fbc140/nmstate-console-plugin/0.log" Feb 23 14:04:23 crc kubenswrapper[4851]: I0223 14:04:23.493525 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-gdkg9_375d9b3b-340d-4b74-b352-74ac68607ad8/nmstate-handler/0.log" Feb 23 14:04:23 crc kubenswrapper[4851]: I0223 14:04:23.529150 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jmkxx_c4210a5b-8df0-4ccc-9811-5a2a831c2fa1/kube-rbac-proxy/0.log" Feb 23 14:04:23 crc kubenswrapper[4851]: I0223 14:04:23.624658 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jmkxx_c4210a5b-8df0-4ccc-9811-5a2a831c2fa1/nmstate-metrics/0.log" Feb 23 14:04:23 crc kubenswrapper[4851]: I0223 14:04:23.690863 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-4k8hs_238cc1b8-1f38-43fa-92ca-bf3561e793fd/nmstate-operator/0.log" Feb 23 14:04:23 crc kubenswrapper[4851]: I0223 14:04:23.822777 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-mf27m_730c196a-16c5-4564-a5e1-db3f9fdd31d7/nmstate-webhook/0.log" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.229906 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ptwgk"] Feb 23 14:04:24 crc kubenswrapper[4851]: E0223 14:04:24.230359 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a0ceb2-3749-4801-9dee-49b716bb7e41" containerName="container-00" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.230381 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a0ceb2-3749-4801-9dee-49b716bb7e41" containerName="container-00" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.230608 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a0ceb2-3749-4801-9dee-49b716bb7e41" containerName="container-00" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.232198 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.265572 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptwgk"] Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.378548 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-utilities\") pod \"community-operators-ptwgk\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.378642 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-catalog-content\") pod \"community-operators-ptwgk\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.378697 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pzzs\" (UniqueName: \"kubernetes.io/projected/54156d63-f2ca-4e74-99ac-5306682b4001-kube-api-access-7pzzs\") pod \"community-operators-ptwgk\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.481750 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-utilities\") pod \"community-operators-ptwgk\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.481869 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-catalog-content\") pod \"community-operators-ptwgk\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.481943 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pzzs\" (UniqueName: \"kubernetes.io/projected/54156d63-f2ca-4e74-99ac-5306682b4001-kube-api-access-7pzzs\") pod \"community-operators-ptwgk\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.491712 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-catalog-content\") pod \"community-operators-ptwgk\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.492265 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-utilities\") pod \"community-operators-ptwgk\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.516082 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pzzs\" (UniqueName: \"kubernetes.io/projected/54156d63-f2ca-4e74-99ac-5306682b4001-kube-api-access-7pzzs\") pod \"community-operators-ptwgk\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:24 crc kubenswrapper[4851]: I0223 14:04:24.592981 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:25 crc kubenswrapper[4851]: I0223 14:04:25.103320 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptwgk"] Feb 23 14:04:25 crc kubenswrapper[4851]: I0223 14:04:25.563151 4851 generic.go:334] "Generic (PLEG): container finished" podID="54156d63-f2ca-4e74-99ac-5306682b4001" containerID="96eb886c8acce092ff1ab8083da28719fba5247c236aa22c2fa36083f8e1145e" exitCode=0 Feb 23 14:04:25 crc kubenswrapper[4851]: I0223 14:04:25.563260 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwgk" event={"ID":"54156d63-f2ca-4e74-99ac-5306682b4001","Type":"ContainerDied","Data":"96eb886c8acce092ff1ab8083da28719fba5247c236aa22c2fa36083f8e1145e"} Feb 23 14:04:25 crc kubenswrapper[4851]: I0223 14:04:25.563482 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwgk" event={"ID":"54156d63-f2ca-4e74-99ac-5306682b4001","Type":"ContainerStarted","Data":"1dcfb4ae4f1f232c8a6585dd7a2fd69b8d94670f8f21a302f64561b57753a62c"} Feb 23 14:04:28 crc kubenswrapper[4851]: I0223 14:04:28.599436 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwgk" event={"ID":"54156d63-f2ca-4e74-99ac-5306682b4001","Type":"ContainerStarted","Data":"b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d"} Feb 23 14:04:29 crc kubenswrapper[4851]: I0223 14:04:29.608080 4851 generic.go:334] "Generic (PLEG): container finished" podID="54156d63-f2ca-4e74-99ac-5306682b4001" containerID="b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d" exitCode=0 Feb 23 14:04:29 crc kubenswrapper[4851]: I0223 14:04:29.608316 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwgk" 
event={"ID":"54156d63-f2ca-4e74-99ac-5306682b4001","Type":"ContainerDied","Data":"b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d"} Feb 23 14:04:29 crc kubenswrapper[4851]: I0223 14:04:29.969028 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:04:29 crc kubenswrapper[4851]: E0223 14:04:29.969269 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:04:31 crc kubenswrapper[4851]: I0223 14:04:31.637036 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwgk" event={"ID":"54156d63-f2ca-4e74-99ac-5306682b4001","Type":"ContainerStarted","Data":"1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063"} Feb 23 14:04:31 crc kubenswrapper[4851]: I0223 14:04:31.660708 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ptwgk" podStartSLOduration=2.266019105 podStartE2EDuration="7.660681375s" podCreationTimestamp="2026-02-23 14:04:24 +0000 UTC" firstStartedPulling="2026-02-23 14:04:25.564869617 +0000 UTC m=+3420.246573295" lastFinishedPulling="2026-02-23 14:04:30.959531887 +0000 UTC m=+3425.641235565" observedRunningTime="2026-02-23 14:04:31.653247975 +0000 UTC m=+3426.334951663" watchObservedRunningTime="2026-02-23 14:04:31.660681375 +0000 UTC m=+3426.342385053" Feb 23 14:04:34 crc kubenswrapper[4851]: I0223 14:04:34.593091 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:34 crc 
kubenswrapper[4851]: I0223 14:04:34.594464 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:34 crc kubenswrapper[4851]: I0223 14:04:34.652117 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:40 crc kubenswrapper[4851]: I0223 14:04:40.969252 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:04:40 crc kubenswrapper[4851]: E0223 14:04:40.970119 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:04:44 crc kubenswrapper[4851]: I0223 14:04:44.645396 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:44 crc kubenswrapper[4851]: I0223 14:04:44.700324 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptwgk"] Feb 23 14:04:44 crc kubenswrapper[4851]: I0223 14:04:44.738798 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ptwgk" podUID="54156d63-f2ca-4e74-99ac-5306682b4001" containerName="registry-server" containerID="cri-o://1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063" gracePeriod=2 Feb 23 14:04:45 crc kubenswrapper[4851]: E0223 14:04:45.006818 4851 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54156d63_f2ca_4e74_99ac_5306682b4001.slice/crio-1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063.scope\": RecentStats: unable to find data in memory cache]" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.257347 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.375783 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-utilities\") pod \"54156d63-f2ca-4e74-99ac-5306682b4001\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.376531 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-catalog-content\") pod \"54156d63-f2ca-4e74-99ac-5306682b4001\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.376582 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pzzs\" (UniqueName: \"kubernetes.io/projected/54156d63-f2ca-4e74-99ac-5306682b4001-kube-api-access-7pzzs\") pod \"54156d63-f2ca-4e74-99ac-5306682b4001\" (UID: \"54156d63-f2ca-4e74-99ac-5306682b4001\") " Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.376904 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-utilities" (OuterVolumeSpecName: "utilities") pod "54156d63-f2ca-4e74-99ac-5306682b4001" (UID: "54156d63-f2ca-4e74-99ac-5306682b4001"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.377156 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.383430 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54156d63-f2ca-4e74-99ac-5306682b4001-kube-api-access-7pzzs" (OuterVolumeSpecName: "kube-api-access-7pzzs") pod "54156d63-f2ca-4e74-99ac-5306682b4001" (UID: "54156d63-f2ca-4e74-99ac-5306682b4001"). InnerVolumeSpecName "kube-api-access-7pzzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.432003 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54156d63-f2ca-4e74-99ac-5306682b4001" (UID: "54156d63-f2ca-4e74-99ac-5306682b4001"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.478323 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54156d63-f2ca-4e74-99ac-5306682b4001-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.478372 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pzzs\" (UniqueName: \"kubernetes.io/projected/54156d63-f2ca-4e74-99ac-5306682b4001-kube-api-access-7pzzs\") on node \"crc\" DevicePath \"\"" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.748980 4851 generic.go:334] "Generic (PLEG): container finished" podID="54156d63-f2ca-4e74-99ac-5306682b4001" containerID="1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063" exitCode=0 Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.749038 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptwgk" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.749038 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwgk" event={"ID":"54156d63-f2ca-4e74-99ac-5306682b4001","Type":"ContainerDied","Data":"1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063"} Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.749112 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptwgk" event={"ID":"54156d63-f2ca-4e74-99ac-5306682b4001","Type":"ContainerDied","Data":"1dcfb4ae4f1f232c8a6585dd7a2fd69b8d94670f8f21a302f64561b57753a62c"} Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.749138 4851 scope.go:117] "RemoveContainer" containerID="1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.793356 4851 scope.go:117] "RemoveContainer" 
containerID="b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.805745 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptwgk"] Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.815167 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ptwgk"] Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.818699 4851 scope.go:117] "RemoveContainer" containerID="96eb886c8acce092ff1ab8083da28719fba5247c236aa22c2fa36083f8e1145e" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.864058 4851 scope.go:117] "RemoveContainer" containerID="1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063" Feb 23 14:04:45 crc kubenswrapper[4851]: E0223 14:04:45.864572 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063\": container with ID starting with 1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063 not found: ID does not exist" containerID="1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.864601 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063"} err="failed to get container status \"1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063\": rpc error: code = NotFound desc = could not find container \"1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063\": container with ID starting with 1ba2f1ebc02d9ff6ac677348901466bc8ae89efefa5c074bccddafbe627d6063 not found: ID does not exist" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.864621 4851 scope.go:117] "RemoveContainer" 
containerID="b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d" Feb 23 14:04:45 crc kubenswrapper[4851]: E0223 14:04:45.864878 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d\": container with ID starting with b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d not found: ID does not exist" containerID="b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.864903 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d"} err="failed to get container status \"b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d\": rpc error: code = NotFound desc = could not find container \"b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d\": container with ID starting with b5479882937e2f52386e1995d8bd9b65d26412679e8a6054a323f4b7cedf773d not found: ID does not exist" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.864917 4851 scope.go:117] "RemoveContainer" containerID="96eb886c8acce092ff1ab8083da28719fba5247c236aa22c2fa36083f8e1145e" Feb 23 14:04:45 crc kubenswrapper[4851]: E0223 14:04:45.865164 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96eb886c8acce092ff1ab8083da28719fba5247c236aa22c2fa36083f8e1145e\": container with ID starting with 96eb886c8acce092ff1ab8083da28719fba5247c236aa22c2fa36083f8e1145e not found: ID does not exist" containerID="96eb886c8acce092ff1ab8083da28719fba5247c236aa22c2fa36083f8e1145e" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.865185 4851 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"96eb886c8acce092ff1ab8083da28719fba5247c236aa22c2fa36083f8e1145e"} err="failed to get container status \"96eb886c8acce092ff1ab8083da28719fba5247c236aa22c2fa36083f8e1145e\": rpc error: code = NotFound desc = could not find container \"96eb886c8acce092ff1ab8083da28719fba5247c236aa22c2fa36083f8e1145e\": container with ID starting with 96eb886c8acce092ff1ab8083da28719fba5247c236aa22c2fa36083f8e1145e not found: ID does not exist" Feb 23 14:04:45 crc kubenswrapper[4851]: I0223 14:04:45.978511 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54156d63-f2ca-4e74-99ac-5306682b4001" path="/var/lib/kubelet/pods/54156d63-f2ca-4e74-99ac-5306682b4001/volumes" Feb 23 14:04:53 crc kubenswrapper[4851]: I0223 14:04:53.563039 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-88gf5_2911e001-3b48-4ffc-9681-100739828235/kube-rbac-proxy/0.log" Feb 23 14:04:53 crc kubenswrapper[4851]: I0223 14:04:53.607929 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-88gf5_2911e001-3b48-4ffc-9681-100739828235/controller/0.log" Feb 23 14:04:53 crc kubenswrapper[4851]: I0223 14:04:53.769784 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-frr-files/0.log" Feb 23 14:04:53 crc kubenswrapper[4851]: I0223 14:04:53.954351 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-metrics/0.log" Feb 23 14:04:53 crc kubenswrapper[4851]: I0223 14:04:53.972628 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-reloader/0.log" Feb 23 14:04:53 crc kubenswrapper[4851]: I0223 14:04:53.973459 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-reloader/0.log" Feb 23 14:04:53 crc kubenswrapper[4851]: I0223 14:04:53.996198 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-frr-files/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.183717 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-frr-files/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.195426 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-metrics/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.199997 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-metrics/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.213922 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-reloader/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.394267 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-reloader/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.396959 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/controller/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.420945 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-frr-files/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.429988 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-metrics/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.592910 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/frr-metrics/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.606980 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/kube-rbac-proxy/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.719387 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/kube-rbac-proxy-frr/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.784233 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/reloader/0.log" Feb 23 14:04:54 crc kubenswrapper[4851]: I0223 14:04:54.917739 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-8qsz9_21b51896-5127-4eef-8f88-87b1e811103c/frr-k8s-webhook-server/0.log" Feb 23 14:04:55 crc kubenswrapper[4851]: I0223 14:04:55.162026 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58d4d555d4-9b64v_0926a535-2dd4-4e82-9bff-6f806330985a/manager/0.log" Feb 23 14:04:55 crc kubenswrapper[4851]: I0223 14:04:55.201028 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5d6f8cc6fd-wcv5v_e9ea5798-bfec-4380-b5db-eee20abfe719/webhook-server/0.log" Feb 23 14:04:55 crc kubenswrapper[4851]: I0223 14:04:55.421774 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fvlxq_44188c33-1cb1-4c27-8314-4431469de3bb/kube-rbac-proxy/0.log" Feb 23 14:04:55 crc kubenswrapper[4851]: I0223 14:04:55.787288 4851 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/frr/0.log" Feb 23 14:04:55 crc kubenswrapper[4851]: I0223 14:04:55.838353 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fvlxq_44188c33-1cb1-4c27-8314-4431469de3bb/speaker/0.log" Feb 23 14:04:55 crc kubenswrapper[4851]: I0223 14:04:55.975367 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:04:55 crc kubenswrapper[4851]: E0223 14:04:55.975705 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:05:07 crc kubenswrapper[4851]: I0223 14:05:07.969586 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:05:07 crc kubenswrapper[4851]: E0223 14:05:07.970383 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:05:08 crc kubenswrapper[4851]: I0223 14:05:08.304395 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/util/0.log" Feb 23 14:05:08 crc kubenswrapper[4851]: I0223 
14:05:08.533231 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/pull/0.log" Feb 23 14:05:08 crc kubenswrapper[4851]: I0223 14:05:08.585393 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/util/0.log" Feb 23 14:05:08 crc kubenswrapper[4851]: I0223 14:05:08.598202 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/pull/0.log" Feb 23 14:05:08 crc kubenswrapper[4851]: I0223 14:05:08.742705 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/util/0.log" Feb 23 14:05:08 crc kubenswrapper[4851]: I0223 14:05:08.771162 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/extract/0.log" Feb 23 14:05:08 crc kubenswrapper[4851]: I0223 14:05:08.801282 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/pull/0.log" Feb 23 14:05:08 crc kubenswrapper[4851]: I0223 14:05:08.913780 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-utilities/0.log" Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.108640 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-content/0.log" 
Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.119481 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-utilities/0.log" Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.146623 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-content/0.log" Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.273686 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-utilities/0.log" Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.293504 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-content/0.log" Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.516146 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-utilities/0.log" Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.742818 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-content/0.log" Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.780214 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/registry-server/0.log" Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.787197 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-utilities/0.log" Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.791920 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-content/0.log" Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.927854 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-utilities/0.log" Feb 23 14:05:09 crc kubenswrapper[4851]: I0223 14:05:09.986139 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-content/0.log" Feb 23 14:05:10 crc kubenswrapper[4851]: I0223 14:05:10.118147 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/util/0.log" Feb 23 14:05:10 crc kubenswrapper[4851]: I0223 14:05:10.340553 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/pull/0.log" Feb 23 14:05:10 crc kubenswrapper[4851]: I0223 14:05:10.365038 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/pull/0.log" Feb 23 14:05:10 crc kubenswrapper[4851]: I0223 14:05:10.476578 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/util/0.log" Feb 23 14:05:10 crc kubenswrapper[4851]: I0223 14:05:10.520167 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/registry-server/0.log" Feb 23 14:05:10 crc kubenswrapper[4851]: I0223 14:05:10.653362 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/pull/0.log" Feb 23 14:05:10 crc kubenswrapper[4851]: I0223 14:05:10.658800 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/util/0.log" Feb 23 14:05:10 crc kubenswrapper[4851]: I0223 14:05:10.704059 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/extract/0.log" Feb 23 14:05:10 crc kubenswrapper[4851]: I0223 14:05:10.841479 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jxzb2_0d8139b6-0c9b-48cf-b664-44304568f2d1/marketplace-operator/0.log" Feb 23 14:05:10 crc kubenswrapper[4851]: I0223 14:05:10.898030 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-utilities/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.047842 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-utilities/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.087045 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-content/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.113888 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-content/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.263666 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-content/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.280410 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-utilities/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.354640 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/registry-server/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.643684 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-utilities/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.752885 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-utilities/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.774400 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-content/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.780825 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-content/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.911780 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-content/0.log" Feb 23 14:05:11 crc kubenswrapper[4851]: I0223 14:05:11.950161 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-utilities/0.log" Feb 
23 14:05:12 crc kubenswrapper[4851]: I0223 14:05:12.396645 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/registry-server/0.log" Feb 23 14:05:14 crc kubenswrapper[4851]: I0223 14:05:14.816970 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ck2kr"] Feb 23 14:05:14 crc kubenswrapper[4851]: E0223 14:05:14.818467 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54156d63-f2ca-4e74-99ac-5306682b4001" containerName="extract-content" Feb 23 14:05:14 crc kubenswrapper[4851]: I0223 14:05:14.818570 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="54156d63-f2ca-4e74-99ac-5306682b4001" containerName="extract-content" Feb 23 14:05:14 crc kubenswrapper[4851]: E0223 14:05:14.818666 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54156d63-f2ca-4e74-99ac-5306682b4001" containerName="extract-utilities" Feb 23 14:05:14 crc kubenswrapper[4851]: I0223 14:05:14.818725 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="54156d63-f2ca-4e74-99ac-5306682b4001" containerName="extract-utilities" Feb 23 14:05:14 crc kubenswrapper[4851]: E0223 14:05:14.818799 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54156d63-f2ca-4e74-99ac-5306682b4001" containerName="registry-server" Feb 23 14:05:14 crc kubenswrapper[4851]: I0223 14:05:14.818864 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="54156d63-f2ca-4e74-99ac-5306682b4001" containerName="registry-server" Feb 23 14:05:14 crc kubenswrapper[4851]: I0223 14:05:14.819143 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="54156d63-f2ca-4e74-99ac-5306682b4001" containerName="registry-server" Feb 23 14:05:14 crc kubenswrapper[4851]: I0223 14:05:14.820988 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:14 crc kubenswrapper[4851]: I0223 14:05:14.828278 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ck2kr"] Feb 23 14:05:14 crc kubenswrapper[4851]: I0223 14:05:14.925363 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-catalog-content\") pod \"certified-operators-ck2kr\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:14 crc kubenswrapper[4851]: I0223 14:05:14.925477 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9jln\" (UniqueName: \"kubernetes.io/projected/50a913a4-7201-47a3-ba19-eafbda8cafbb-kube-api-access-v9jln\") pod \"certified-operators-ck2kr\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:14 crc kubenswrapper[4851]: I0223 14:05:14.925553 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-utilities\") pod \"certified-operators-ck2kr\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:15 crc kubenswrapper[4851]: I0223 14:05:15.026911 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-catalog-content\") pod \"certified-operators-ck2kr\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:15 crc kubenswrapper[4851]: I0223 14:05:15.027051 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-v9jln\" (UniqueName: \"kubernetes.io/projected/50a913a4-7201-47a3-ba19-eafbda8cafbb-kube-api-access-v9jln\") pod \"certified-operators-ck2kr\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:15 crc kubenswrapper[4851]: I0223 14:05:15.027105 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-utilities\") pod \"certified-operators-ck2kr\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:15 crc kubenswrapper[4851]: I0223 14:05:15.027298 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-catalog-content\") pod \"certified-operators-ck2kr\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:15 crc kubenswrapper[4851]: I0223 14:05:15.028033 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-utilities\") pod \"certified-operators-ck2kr\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:15 crc kubenswrapper[4851]: I0223 14:05:15.063371 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9jln\" (UniqueName: \"kubernetes.io/projected/50a913a4-7201-47a3-ba19-eafbda8cafbb-kube-api-access-v9jln\") pod \"certified-operators-ck2kr\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:15 crc kubenswrapper[4851]: I0223 14:05:15.149009 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:15 crc kubenswrapper[4851]: I0223 14:05:15.659243 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ck2kr"] Feb 23 14:05:15 crc kubenswrapper[4851]: I0223 14:05:15.990298 4851 generic.go:334] "Generic (PLEG): container finished" podID="50a913a4-7201-47a3-ba19-eafbda8cafbb" containerID="e95940ef475fdc1bc503a4deeb5df6c8b6f6b78a8a101924b8ac409d4c71dac3" exitCode=0 Feb 23 14:05:15 crc kubenswrapper[4851]: I0223 14:05:15.990414 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck2kr" event={"ID":"50a913a4-7201-47a3-ba19-eafbda8cafbb","Type":"ContainerDied","Data":"e95940ef475fdc1bc503a4deeb5df6c8b6f6b78a8a101924b8ac409d4c71dac3"} Feb 23 14:05:15 crc kubenswrapper[4851]: I0223 14:05:15.990672 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck2kr" event={"ID":"50a913a4-7201-47a3-ba19-eafbda8cafbb","Type":"ContainerStarted","Data":"8f2bcbca068ad78f5ed09279df8c5a9782a357058dcd8f13cf13213c39154887"} Feb 23 14:05:17 crc kubenswrapper[4851]: I0223 14:05:17.012838 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck2kr" event={"ID":"50a913a4-7201-47a3-ba19-eafbda8cafbb","Type":"ContainerStarted","Data":"7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306"} Feb 23 14:05:18 crc kubenswrapper[4851]: I0223 14:05:18.028055 4851 generic.go:334] "Generic (PLEG): container finished" podID="50a913a4-7201-47a3-ba19-eafbda8cafbb" containerID="7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306" exitCode=0 Feb 23 14:05:18 crc kubenswrapper[4851]: I0223 14:05:18.028126 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck2kr" 
event={"ID":"50a913a4-7201-47a3-ba19-eafbda8cafbb","Type":"ContainerDied","Data":"7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306"} Feb 23 14:05:19 crc kubenswrapper[4851]: I0223 14:05:19.039298 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck2kr" event={"ID":"50a913a4-7201-47a3-ba19-eafbda8cafbb","Type":"ContainerStarted","Data":"13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c"} Feb 23 14:05:19 crc kubenswrapper[4851]: I0223 14:05:19.059415 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ck2kr" podStartSLOduration=2.554241036 podStartE2EDuration="5.059396654s" podCreationTimestamp="2026-02-23 14:05:14 +0000 UTC" firstStartedPulling="2026-02-23 14:05:15.992012229 +0000 UTC m=+3470.673715907" lastFinishedPulling="2026-02-23 14:05:18.497167847 +0000 UTC m=+3473.178871525" observedRunningTime="2026-02-23 14:05:19.059303782 +0000 UTC m=+3473.741007480" watchObservedRunningTime="2026-02-23 14:05:19.059396654 +0000 UTC m=+3473.741100332" Feb 23 14:05:21 crc kubenswrapper[4851]: I0223 14:05:21.969446 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:05:21 crc kubenswrapper[4851]: E0223 14:05:21.970284 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:05:25 crc kubenswrapper[4851]: I0223 14:05:25.149511 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:25 crc 
kubenswrapper[4851]: I0223 14:05:25.149841 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:25 crc kubenswrapper[4851]: I0223 14:05:25.202427 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:26 crc kubenswrapper[4851]: I0223 14:05:26.147030 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:26 crc kubenswrapper[4851]: I0223 14:05:26.205518 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ck2kr"] Feb 23 14:05:28 crc kubenswrapper[4851]: I0223 14:05:28.119010 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ck2kr" podUID="50a913a4-7201-47a3-ba19-eafbda8cafbb" containerName="registry-server" containerID="cri-o://13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c" gracePeriod=2 Feb 23 14:05:28 crc kubenswrapper[4851]: I0223 14:05:28.650108 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:28 crc kubenswrapper[4851]: I0223 14:05:28.684940 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9jln\" (UniqueName: \"kubernetes.io/projected/50a913a4-7201-47a3-ba19-eafbda8cafbb-kube-api-access-v9jln\") pod \"50a913a4-7201-47a3-ba19-eafbda8cafbb\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " Feb 23 14:05:28 crc kubenswrapper[4851]: I0223 14:05:28.685194 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-utilities\") pod \"50a913a4-7201-47a3-ba19-eafbda8cafbb\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " Feb 23 14:05:28 crc kubenswrapper[4851]: I0223 14:05:28.685219 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-catalog-content\") pod \"50a913a4-7201-47a3-ba19-eafbda8cafbb\" (UID: \"50a913a4-7201-47a3-ba19-eafbda8cafbb\") " Feb 23 14:05:28 crc kubenswrapper[4851]: I0223 14:05:28.685988 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-utilities" (OuterVolumeSpecName: "utilities") pod "50a913a4-7201-47a3-ba19-eafbda8cafbb" (UID: "50a913a4-7201-47a3-ba19-eafbda8cafbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:05:28 crc kubenswrapper[4851]: I0223 14:05:28.713506 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a913a4-7201-47a3-ba19-eafbda8cafbb-kube-api-access-v9jln" (OuterVolumeSpecName: "kube-api-access-v9jln") pod "50a913a4-7201-47a3-ba19-eafbda8cafbb" (UID: "50a913a4-7201-47a3-ba19-eafbda8cafbb"). InnerVolumeSpecName "kube-api-access-v9jln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:05:28 crc kubenswrapper[4851]: I0223 14:05:28.737885 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50a913a4-7201-47a3-ba19-eafbda8cafbb" (UID: "50a913a4-7201-47a3-ba19-eafbda8cafbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:05:28 crc kubenswrapper[4851]: I0223 14:05:28.787100 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9jln\" (UniqueName: \"kubernetes.io/projected/50a913a4-7201-47a3-ba19-eafbda8cafbb-kube-api-access-v9jln\") on node \"crc\" DevicePath \"\"" Feb 23 14:05:28 crc kubenswrapper[4851]: I0223 14:05:28.787137 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 14:05:28 crc kubenswrapper[4851]: I0223 14:05:28.787147 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a913a4-7201-47a3-ba19-eafbda8cafbb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.128692 4851 generic.go:334] "Generic (PLEG): container finished" podID="50a913a4-7201-47a3-ba19-eafbda8cafbb" containerID="13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c" exitCode=0 Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.128735 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ck2kr" event={"ID":"50a913a4-7201-47a3-ba19-eafbda8cafbb","Type":"ContainerDied","Data":"13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c"} Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.128760 4851 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-ck2kr" event={"ID":"50a913a4-7201-47a3-ba19-eafbda8cafbb","Type":"ContainerDied","Data":"8f2bcbca068ad78f5ed09279df8c5a9782a357058dcd8f13cf13213c39154887"} Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.128780 4851 scope.go:117] "RemoveContainer" containerID="13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c" Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.129421 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ck2kr" Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.158882 4851 scope.go:117] "RemoveContainer" containerID="7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306" Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.191398 4851 scope.go:117] "RemoveContainer" containerID="e95940ef475fdc1bc503a4deeb5df6c8b6f6b78a8a101924b8ac409d4c71dac3" Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.203168 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ck2kr"] Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.215635 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ck2kr"] Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.228374 4851 scope.go:117] "RemoveContainer" containerID="13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c" Feb 23 14:05:29 crc kubenswrapper[4851]: E0223 14:05:29.228917 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c\": container with ID starting with 13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c not found: ID does not exist" containerID="13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c" Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 
14:05:29.228962 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c"} err="failed to get container status \"13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c\": rpc error: code = NotFound desc = could not find container \"13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c\": container with ID starting with 13b77b8b7e91f4fc0c4672ff9f653365cbc34ebf3aa4e44ea75cd25b0b98471c not found: ID does not exist" Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.228991 4851 scope.go:117] "RemoveContainer" containerID="7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306" Feb 23 14:05:29 crc kubenswrapper[4851]: E0223 14:05:29.229430 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306\": container with ID starting with 7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306 not found: ID does not exist" containerID="7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306" Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.229475 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306"} err="failed to get container status \"7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306\": rpc error: code = NotFound desc = could not find container \"7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306\": container with ID starting with 7f3ef6c2554aa0241359e23a39214b9ef61ebd574f93188e5ca0cca46d240306 not found: ID does not exist" Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.229505 4851 scope.go:117] "RemoveContainer" containerID="e95940ef475fdc1bc503a4deeb5df6c8b6f6b78a8a101924b8ac409d4c71dac3" Feb 23 14:05:29 crc 
kubenswrapper[4851]: E0223 14:05:29.229718 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e95940ef475fdc1bc503a4deeb5df6c8b6f6b78a8a101924b8ac409d4c71dac3\": container with ID starting with e95940ef475fdc1bc503a4deeb5df6c8b6f6b78a8a101924b8ac409d4c71dac3 not found: ID does not exist" containerID="e95940ef475fdc1bc503a4deeb5df6c8b6f6b78a8a101924b8ac409d4c71dac3" Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.229747 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95940ef475fdc1bc503a4deeb5df6c8b6f6b78a8a101924b8ac409d4c71dac3"} err="failed to get container status \"e95940ef475fdc1bc503a4deeb5df6c8b6f6b78a8a101924b8ac409d4c71dac3\": rpc error: code = NotFound desc = could not find container \"e95940ef475fdc1bc503a4deeb5df6c8b6f6b78a8a101924b8ac409d4c71dac3\": container with ID starting with e95940ef475fdc1bc503a4deeb5df6c8b6f6b78a8a101924b8ac409d4c71dac3 not found: ID does not exist" Feb 23 14:05:29 crc kubenswrapper[4851]: I0223 14:05:29.978243 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a913a4-7201-47a3-ba19-eafbda8cafbb" path="/var/lib/kubelet/pods/50a913a4-7201-47a3-ba19-eafbda8cafbb/volumes" Feb 23 14:05:32 crc kubenswrapper[4851]: I0223 14:05:32.969234 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:05:32 crc kubenswrapper[4851]: E0223 14:05:32.969965 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:05:43 crc 
kubenswrapper[4851]: I0223 14:05:43.968971 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:05:43 crc kubenswrapper[4851]: E0223 14:05:43.969673 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:05:55 crc kubenswrapper[4851]: I0223 14:05:55.970933 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:05:55 crc kubenswrapper[4851]: E0223 14:05:55.971900 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:06:10 crc kubenswrapper[4851]: I0223 14:06:10.969016 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:06:10 crc kubenswrapper[4851]: E0223 14:06:10.969674 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 
23 14:06:23 crc kubenswrapper[4851]: I0223 14:06:23.969105 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab"
Feb 23 14:06:24 crc kubenswrapper[4851]: I0223 14:06:24.677944 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"d2608ceb581f02745decb9aaeafec5770a4f1df96a11f0114b46173bf46dba1a"}
Feb 23 14:06:56 crc kubenswrapper[4851]: I0223 14:06:56.022430 4851 generic.go:334] "Generic (PLEG): container finished" podID="064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" containerID="e8d1f24f6eac87bab66646102acc5489bc18efe85d67f25b377617e8107bb945" exitCode=0
Feb 23 14:06:56 crc kubenswrapper[4851]: I0223 14:06:56.022501 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kjx6d/must-gather-cc4zr" event={"ID":"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c","Type":"ContainerDied","Data":"e8d1f24f6eac87bab66646102acc5489bc18efe85d67f25b377617e8107bb945"}
Feb 23 14:06:56 crc kubenswrapper[4851]: I0223 14:06:56.023805 4851 scope.go:117] "RemoveContainer" containerID="e8d1f24f6eac87bab66646102acc5489bc18efe85d67f25b377617e8107bb945"
Feb 23 14:06:56 crc kubenswrapper[4851]: I0223 14:06:56.472045 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kjx6d_must-gather-cc4zr_064ec4aa-abbc-4ff6-9550-eda3ba5ed23c/gather/0.log"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.113010 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vxnkx"]
Feb 23 14:07:03 crc kubenswrapper[4851]: E0223 14:07:03.113618 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a913a4-7201-47a3-ba19-eafbda8cafbb" containerName="extract-content"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.113630 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a913a4-7201-47a3-ba19-eafbda8cafbb" containerName="extract-content"
Feb 23 14:07:03 crc kubenswrapper[4851]: E0223 14:07:03.113650 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a913a4-7201-47a3-ba19-eafbda8cafbb" containerName="extract-utilities"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.113656 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a913a4-7201-47a3-ba19-eafbda8cafbb" containerName="extract-utilities"
Feb 23 14:07:03 crc kubenswrapper[4851]: E0223 14:07:03.113678 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a913a4-7201-47a3-ba19-eafbda8cafbb" containerName="registry-server"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.113684 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a913a4-7201-47a3-ba19-eafbda8cafbb" containerName="registry-server"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.113859 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a913a4-7201-47a3-ba19-eafbda8cafbb" containerName="registry-server"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.115226 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.129237 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxnkx"]
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.152414 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spbrc\" (UniqueName: \"kubernetes.io/projected/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-kube-api-access-spbrc\") pod \"redhat-marketplace-vxnkx\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") " pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.152518 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-catalog-content\") pod \"redhat-marketplace-vxnkx\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") " pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.152575 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-utilities\") pod \"redhat-marketplace-vxnkx\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") " pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.253922 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-utilities\") pod \"redhat-marketplace-vxnkx\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") " pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.254018 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spbrc\" (UniqueName: \"kubernetes.io/projected/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-kube-api-access-spbrc\") pod \"redhat-marketplace-vxnkx\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") " pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.254090 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-catalog-content\") pod \"redhat-marketplace-vxnkx\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") " pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.254551 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-utilities\") pod \"redhat-marketplace-vxnkx\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") " pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.254559 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-catalog-content\") pod \"redhat-marketplace-vxnkx\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") " pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.301214 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spbrc\" (UniqueName: \"kubernetes.io/projected/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-kube-api-access-spbrc\") pod \"redhat-marketplace-vxnkx\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") " pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.434951 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.827128 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kjx6d/must-gather-cc4zr"]
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.827418 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kjx6d/must-gather-cc4zr" podUID="064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" containerName="copy" containerID="cri-o://ddcae191c926ed8a40ccd0e061e35c51dd83d35aad190c3b1dc1d04c47418fa5" gracePeriod=2
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.876553 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kjx6d/must-gather-cc4zr"]
Feb 23 14:07:03 crc kubenswrapper[4851]: I0223 14:07:03.923819 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxnkx"]
Feb 23 14:07:04 crc kubenswrapper[4851]: I0223 14:07:04.111750 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kjx6d_must-gather-cc4zr_064ec4aa-abbc-4ff6-9550-eda3ba5ed23c/copy/0.log"
Feb 23 14:07:04 crc kubenswrapper[4851]: I0223 14:07:04.112132 4851 generic.go:334] "Generic (PLEG): container finished" podID="064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" containerID="ddcae191c926ed8a40ccd0e061e35c51dd83d35aad190c3b1dc1d04c47418fa5" exitCode=143
Feb 23 14:07:04 crc kubenswrapper[4851]: I0223 14:07:04.114110 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxnkx" event={"ID":"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3","Type":"ContainerStarted","Data":"dfadc873e69c66a1b7441fecad8e861e041780b5958a715a4f2acd75bbcf1eb3"}
Feb 23 14:07:04 crc kubenswrapper[4851]: I0223 14:07:04.263929 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kjx6d_must-gather-cc4zr_064ec4aa-abbc-4ff6-9550-eda3ba5ed23c/copy/0.log"
Feb 23 14:07:04 crc kubenswrapper[4851]: I0223 14:07:04.264680 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kjx6d/must-gather-cc4zr"
Feb 23 14:07:04 crc kubenswrapper[4851]: I0223 14:07:04.387100 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jrs2\" (UniqueName: \"kubernetes.io/projected/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-kube-api-access-6jrs2\") pod \"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c\" (UID: \"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c\") "
Feb 23 14:07:04 crc kubenswrapper[4851]: I0223 14:07:04.387401 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-must-gather-output\") pod \"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c\" (UID: \"064ec4aa-abbc-4ff6-9550-eda3ba5ed23c\") "
Feb 23 14:07:04 crc kubenswrapper[4851]: I0223 14:07:04.398628 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-kube-api-access-6jrs2" (OuterVolumeSpecName: "kube-api-access-6jrs2") pod "064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" (UID: "064ec4aa-abbc-4ff6-9550-eda3ba5ed23c"). InnerVolumeSpecName "kube-api-access-6jrs2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 14:07:04 crc kubenswrapper[4851]: I0223 14:07:04.489667 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jrs2\" (UniqueName: \"kubernetes.io/projected/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-kube-api-access-6jrs2\") on node \"crc\" DevicePath \"\""
Feb 23 14:07:04 crc kubenswrapper[4851]: I0223 14:07:04.533708 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" (UID: "064ec4aa-abbc-4ff6-9550-eda3ba5ed23c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 14:07:04 crc kubenswrapper[4851]: I0223 14:07:04.592011 4851 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 23 14:07:05 crc kubenswrapper[4851]: I0223 14:07:05.123596 4851 generic.go:334] "Generic (PLEG): container finished" podID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" containerID="153c570e8cce15e7e48caa8aacf972ee37f7102aefbb6f090b0ef7a6ccf4aa8d" exitCode=0
Feb 23 14:07:05 crc kubenswrapper[4851]: I0223 14:07:05.123803 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxnkx" event={"ID":"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3","Type":"ContainerDied","Data":"153c570e8cce15e7e48caa8aacf972ee37f7102aefbb6f090b0ef7a6ccf4aa8d"}
Feb 23 14:07:05 crc kubenswrapper[4851]: I0223 14:07:05.126277 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kjx6d_must-gather-cc4zr_064ec4aa-abbc-4ff6-9550-eda3ba5ed23c/copy/0.log"
Feb 23 14:07:05 crc kubenswrapper[4851]: I0223 14:07:05.126302 4851 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 14:07:05 crc kubenswrapper[4851]: I0223 14:07:05.126583 4851 scope.go:117] "RemoveContainer" containerID="ddcae191c926ed8a40ccd0e061e35c51dd83d35aad190c3b1dc1d04c47418fa5"
Feb 23 14:07:05 crc kubenswrapper[4851]: I0223 14:07:05.126778 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kjx6d/must-gather-cc4zr"
Feb 23 14:07:05 crc kubenswrapper[4851]: I0223 14:07:05.155214 4851 scope.go:117] "RemoveContainer" containerID="e8d1f24f6eac87bab66646102acc5489bc18efe85d67f25b377617e8107bb945"
Feb 23 14:07:05 crc kubenswrapper[4851]: I0223 14:07:05.983449 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" path="/var/lib/kubelet/pods/064ec4aa-abbc-4ff6-9550-eda3ba5ed23c/volumes"
Feb 23 14:07:06 crc kubenswrapper[4851]: I0223 14:07:06.141034 4851 generic.go:334] "Generic (PLEG): container finished" podID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" containerID="5dc06b92ab15e08e09b4137ea0ccdeeded80b8fc44a766c5776b61e53db8cfd8" exitCode=0
Feb 23 14:07:06 crc kubenswrapper[4851]: I0223 14:07:06.141109 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxnkx" event={"ID":"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3","Type":"ContainerDied","Data":"5dc06b92ab15e08e09b4137ea0ccdeeded80b8fc44a766c5776b61e53db8cfd8"}
Feb 23 14:07:07 crc kubenswrapper[4851]: I0223 14:07:07.152493 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxnkx" event={"ID":"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3","Type":"ContainerStarted","Data":"bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f"}
Feb 23 14:07:07 crc kubenswrapper[4851]: I0223 14:07:07.171484 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vxnkx" podStartSLOduration=2.7169158429999998 podStartE2EDuration="4.171464415s" podCreationTimestamp="2026-02-23 14:07:03 +0000 UTC" firstStartedPulling="2026-02-23 14:07:05.125972099 +0000 UTC m=+3579.807675777" lastFinishedPulling="2026-02-23 14:07:06.580520671 +0000 UTC m=+3581.262224349" observedRunningTime="2026-02-23 14:07:07.167228455 +0000 UTC m=+3581.848932133" watchObservedRunningTime="2026-02-23 14:07:07.171464415 +0000 UTC m=+3581.853168093"
Feb 23 14:07:13 crc kubenswrapper[4851]: I0223 14:07:13.435724 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:13 crc kubenswrapper[4851]: I0223 14:07:13.436254 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:13 crc kubenswrapper[4851]: I0223 14:07:13.487474 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:14 crc kubenswrapper[4851]: I0223 14:07:14.258413 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:14 crc kubenswrapper[4851]: I0223 14:07:14.297940 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxnkx"]
Feb 23 14:07:16 crc kubenswrapper[4851]: I0223 14:07:16.231162 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vxnkx" podUID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" containerName="registry-server" containerID="cri-o://bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f" gracePeriod=2
Feb 23 14:07:16 crc kubenswrapper[4851]: I0223 14:07:16.706799 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:16 crc kubenswrapper[4851]: I0223 14:07:16.819269 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-catalog-content\") pod \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") "
Feb 23 14:07:16 crc kubenswrapper[4851]: I0223 14:07:16.819423 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spbrc\" (UniqueName: \"kubernetes.io/projected/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-kube-api-access-spbrc\") pod \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") "
Feb 23 14:07:16 crc kubenswrapper[4851]: I0223 14:07:16.820921 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-utilities\") pod \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\" (UID: \"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3\") "
Feb 23 14:07:16 crc kubenswrapper[4851]: I0223 14:07:16.821705 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-utilities" (OuterVolumeSpecName: "utilities") pod "7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" (UID: "7a13e0be-c3fa-431a-8f66-a6a3b7963bb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 14:07:16 crc kubenswrapper[4851]: I0223 14:07:16.822064 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 14:07:16 crc kubenswrapper[4851]: I0223 14:07:16.826220 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-kube-api-access-spbrc" (OuterVolumeSpecName: "kube-api-access-spbrc") pod "7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" (UID: "7a13e0be-c3fa-431a-8f66-a6a3b7963bb3"). InnerVolumeSpecName "kube-api-access-spbrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 14:07:16 crc kubenswrapper[4851]: I0223 14:07:16.849687 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" (UID: "7a13e0be-c3fa-431a-8f66-a6a3b7963bb3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 14:07:16 crc kubenswrapper[4851]: I0223 14:07:16.924166 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 14:07:16 crc kubenswrapper[4851]: I0223 14:07:16.924202 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spbrc\" (UniqueName: \"kubernetes.io/projected/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3-kube-api-access-spbrc\") on node \"crc\" DevicePath \"\""
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.242798 4851 generic.go:334] "Generic (PLEG): container finished" podID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" containerID="bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f" exitCode=0
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.242851 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxnkx" event={"ID":"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3","Type":"ContainerDied","Data":"bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f"}
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.242938 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vxnkx"
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.243280 4851 scope.go:117] "RemoveContainer" containerID="bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f"
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.243154 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vxnkx" event={"ID":"7a13e0be-c3fa-431a-8f66-a6a3b7963bb3","Type":"ContainerDied","Data":"dfadc873e69c66a1b7441fecad8e861e041780b5958a715a4f2acd75bbcf1eb3"}
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.263332 4851 scope.go:117] "RemoveContainer" containerID="5dc06b92ab15e08e09b4137ea0ccdeeded80b8fc44a766c5776b61e53db8cfd8"
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.280466 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxnkx"]
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.290013 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vxnkx"]
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.303434 4851 scope.go:117] "RemoveContainer" containerID="153c570e8cce15e7e48caa8aacf972ee37f7102aefbb6f090b0ef7a6ccf4aa8d"
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.334938 4851 scope.go:117] "RemoveContainer" containerID="bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f"
Feb 23 14:07:17 crc kubenswrapper[4851]: E0223 14:07:17.335400 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f\": container with ID starting with bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f not found: ID does not exist" containerID="bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f"
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.335441 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f"} err="failed to get container status \"bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f\": rpc error: code = NotFound desc = could not find container \"bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f\": container with ID starting with bf7d1977e895f7e8cb6cfe88da4d1807c59254c7f9d06e9a88800f522129773f not found: ID does not exist"
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.335461 4851 scope.go:117] "RemoveContainer" containerID="5dc06b92ab15e08e09b4137ea0ccdeeded80b8fc44a766c5776b61e53db8cfd8"
Feb 23 14:07:17 crc kubenswrapper[4851]: E0223 14:07:17.336854 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dc06b92ab15e08e09b4137ea0ccdeeded80b8fc44a766c5776b61e53db8cfd8\": container with ID starting with 5dc06b92ab15e08e09b4137ea0ccdeeded80b8fc44a766c5776b61e53db8cfd8 not found: ID does not exist" containerID="5dc06b92ab15e08e09b4137ea0ccdeeded80b8fc44a766c5776b61e53db8cfd8"
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.336890 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dc06b92ab15e08e09b4137ea0ccdeeded80b8fc44a766c5776b61e53db8cfd8"} err="failed to get container status \"5dc06b92ab15e08e09b4137ea0ccdeeded80b8fc44a766c5776b61e53db8cfd8\": rpc error: code = NotFound desc = could not find container \"5dc06b92ab15e08e09b4137ea0ccdeeded80b8fc44a766c5776b61e53db8cfd8\": container with ID starting with 5dc06b92ab15e08e09b4137ea0ccdeeded80b8fc44a766c5776b61e53db8cfd8 not found: ID does not exist"
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.336911 4851 scope.go:117] "RemoveContainer" containerID="153c570e8cce15e7e48caa8aacf972ee37f7102aefbb6f090b0ef7a6ccf4aa8d"
Feb 23 14:07:17 crc kubenswrapper[4851]: E0223 14:07:17.337208 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"153c570e8cce15e7e48caa8aacf972ee37f7102aefbb6f090b0ef7a6ccf4aa8d\": container with ID starting with 153c570e8cce15e7e48caa8aacf972ee37f7102aefbb6f090b0ef7a6ccf4aa8d not found: ID does not exist" containerID="153c570e8cce15e7e48caa8aacf972ee37f7102aefbb6f090b0ef7a6ccf4aa8d"
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.337405 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"153c570e8cce15e7e48caa8aacf972ee37f7102aefbb6f090b0ef7a6ccf4aa8d"} err="failed to get container status \"153c570e8cce15e7e48caa8aacf972ee37f7102aefbb6f090b0ef7a6ccf4aa8d\": rpc error: code = NotFound desc = could not find container \"153c570e8cce15e7e48caa8aacf972ee37f7102aefbb6f090b0ef7a6ccf4aa8d\": container with ID starting with 153c570e8cce15e7e48caa8aacf972ee37f7102aefbb6f090b0ef7a6ccf4aa8d not found: ID does not exist"
Feb 23 14:07:17 crc kubenswrapper[4851]: I0223 14:07:17.983173 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" path="/var/lib/kubelet/pods/7a13e0be-c3fa-431a-8f66-a6a3b7963bb3/volumes"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.534282 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2bbnj"]
Feb 23 14:07:38 crc kubenswrapper[4851]: E0223 14:07:38.535479 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" containerName="extract-content"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.535497 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" containerName="extract-content"
Feb 23 14:07:38 crc kubenswrapper[4851]: E0223 14:07:38.535523 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" containerName="registry-server"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.535531 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" containerName="registry-server"
Feb 23 14:07:38 crc kubenswrapper[4851]: E0223 14:07:38.535550 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" containerName="gather"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.535560 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" containerName="gather"
Feb 23 14:07:38 crc kubenswrapper[4851]: E0223 14:07:38.535575 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" containerName="copy"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.535583 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" containerName="copy"
Feb 23 14:07:38 crc kubenswrapper[4851]: E0223 14:07:38.535600 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" containerName="extract-utilities"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.535608 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" containerName="extract-utilities"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.535867 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" containerName="copy"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.535898 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="064ec4aa-abbc-4ff6-9550-eda3ba5ed23c" containerName="gather"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.535911 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a13e0be-c3fa-431a-8f66-a6a3b7963bb3" containerName="registry-server"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.537742 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.544716 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2bbnj"]
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.627602 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k8v5\" (UniqueName: \"kubernetes.io/projected/ca26bfa6-cf59-44d5-b371-b7489d9c4448-kube-api-access-7k8v5\") pod \"redhat-operators-2bbnj\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") " pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.627971 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-catalog-content\") pod \"redhat-operators-2bbnj\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") " pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.628033 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-utilities\") pod \"redhat-operators-2bbnj\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") " pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.729903 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-catalog-content\") pod \"redhat-operators-2bbnj\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") " pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.730023 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-utilities\") pod \"redhat-operators-2bbnj\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") " pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.730099 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k8v5\" (UniqueName: \"kubernetes.io/projected/ca26bfa6-cf59-44d5-b371-b7489d9c4448-kube-api-access-7k8v5\") pod \"redhat-operators-2bbnj\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") " pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.730476 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-catalog-content\") pod \"redhat-operators-2bbnj\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") " pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.730737 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-utilities\") pod \"redhat-operators-2bbnj\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") " pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.751097 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k8v5\" (UniqueName: \"kubernetes.io/projected/ca26bfa6-cf59-44d5-b371-b7489d9c4448-kube-api-access-7k8v5\") pod \"redhat-operators-2bbnj\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") " pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:38 crc kubenswrapper[4851]: I0223 14:07:38.868163 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:39 crc kubenswrapper[4851]: I0223 14:07:39.373198 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2bbnj"]
Feb 23 14:07:39 crc kubenswrapper[4851]: W0223 14:07:39.374709 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca26bfa6_cf59_44d5_b371_b7489d9c4448.slice/crio-de60e0cd43f03f9dcbcbc9231f90353eca278d51a80849a9483724f5bf56b99d WatchSource:0}: Error finding container de60e0cd43f03f9dcbcbc9231f90353eca278d51a80849a9483724f5bf56b99d: Status 404 returned error can't find the container with id de60e0cd43f03f9dcbcbc9231f90353eca278d51a80849a9483724f5bf56b99d
Feb 23 14:07:39 crc kubenswrapper[4851]: I0223 14:07:39.456411 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bbnj" event={"ID":"ca26bfa6-cf59-44d5-b371-b7489d9c4448","Type":"ContainerStarted","Data":"de60e0cd43f03f9dcbcbc9231f90353eca278d51a80849a9483724f5bf56b99d"}
Feb 23 14:07:40 crc kubenswrapper[4851]: I0223 14:07:40.467226 4851 generic.go:334] "Generic (PLEG): container finished" podID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" containerID="c2dc65ed0ded3e590909236009935b7d3121baa226e72bba24f728f4bfe1fa57" exitCode=0
Feb 23 14:07:40 crc kubenswrapper[4851]: I0223 14:07:40.467274 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bbnj" event={"ID":"ca26bfa6-cf59-44d5-b371-b7489d9c4448","Type":"ContainerDied","Data":"c2dc65ed0ded3e590909236009935b7d3121baa226e72bba24f728f4bfe1fa57"}
Feb 23 14:07:41 crc kubenswrapper[4851]: I0223 14:07:41.479034 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bbnj" event={"ID":"ca26bfa6-cf59-44d5-b371-b7489d9c4448","Type":"ContainerStarted","Data":"65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5"}
Feb 23 14:07:42 crc kubenswrapper[4851]: I0223 14:07:42.490768 4851 generic.go:334] "Generic (PLEG): container finished" podID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" containerID="65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5" exitCode=0
Feb 23 14:07:42 crc kubenswrapper[4851]: I0223 14:07:42.490825 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bbnj" event={"ID":"ca26bfa6-cf59-44d5-b371-b7489d9c4448","Type":"ContainerDied","Data":"65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5"}
Feb 23 14:07:43 crc kubenswrapper[4851]: I0223 14:07:43.502154 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bbnj" event={"ID":"ca26bfa6-cf59-44d5-b371-b7489d9c4448","Type":"ContainerStarted","Data":"6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0"}
Feb 23 14:07:43 crc kubenswrapper[4851]: I0223 14:07:43.526391 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2bbnj" podStartSLOduration=3.134417199 podStartE2EDuration="5.526371974s" podCreationTimestamp="2026-02-23 14:07:38 +0000 UTC" firstStartedPulling="2026-02-23 14:07:40.469546744 +0000 UTC m=+3615.151250422" lastFinishedPulling="2026-02-23 14:07:42.861501529 +0000 UTC m=+3617.543205197" observedRunningTime="2026-02-23 14:07:43.519398837 +0000 UTC m=+3618.201102525" watchObservedRunningTime="2026-02-23 14:07:43.526371974 +0000 UTC m=+3618.208075652"
Feb 23 14:07:48 crc kubenswrapper[4851]: I0223 14:07:48.869419 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:48 crc kubenswrapper[4851]: I0223 14:07:48.869940 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:48 crc kubenswrapper[4851]: I0223 14:07:48.917593 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:49 crc kubenswrapper[4851]: I0223 14:07:49.598718 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:49 crc kubenswrapper[4851]: I0223 14:07:49.643341 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2bbnj"]
Feb 23 14:07:51 crc kubenswrapper[4851]: I0223 14:07:51.581028 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2bbnj" podUID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" containerName="registry-server" containerID="cri-o://6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0" gracePeriod=2
Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.020638 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2bbnj"
Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.124991 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-utilities\") pod \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") "
Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.125093 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k8v5\" (UniqueName: \"kubernetes.io/projected/ca26bfa6-cf59-44d5-b371-b7489d9c4448-kube-api-access-7k8v5\") pod \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") "
Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.125126 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-catalog-content\") pod \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\" (UID: \"ca26bfa6-cf59-44d5-b371-b7489d9c4448\") "
Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.126033 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-utilities" (OuterVolumeSpecName: "utilities") pod "ca26bfa6-cf59-44d5-b371-b7489d9c4448" (UID: "ca26bfa6-cf59-44d5-b371-b7489d9c4448"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.131284 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca26bfa6-cf59-44d5-b371-b7489d9c4448-kube-api-access-7k8v5" (OuterVolumeSpecName: "kube-api-access-7k8v5") pod "ca26bfa6-cf59-44d5-b371-b7489d9c4448" (UID: "ca26bfa6-cf59-44d5-b371-b7489d9c4448"). InnerVolumeSpecName "kube-api-access-7k8v5".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.227520 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.227576 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k8v5\" (UniqueName: \"kubernetes.io/projected/ca26bfa6-cf59-44d5-b371-b7489d9c4448-kube-api-access-7k8v5\") on node \"crc\" DevicePath \"\"" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.594475 4851 generic.go:334] "Generic (PLEG): container finished" podID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" containerID="6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0" exitCode=0 Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.594535 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bbnj" event={"ID":"ca26bfa6-cf59-44d5-b371-b7489d9c4448","Type":"ContainerDied","Data":"6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0"} Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.594571 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bbnj" event={"ID":"ca26bfa6-cf59-44d5-b371-b7489d9c4448","Type":"ContainerDied","Data":"de60e0cd43f03f9dcbcbc9231f90353eca278d51a80849a9483724f5bf56b99d"} Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.594592 4851 scope.go:117] "RemoveContainer" containerID="6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.594538 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2bbnj" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.618259 4851 scope.go:117] "RemoveContainer" containerID="65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.649122 4851 scope.go:117] "RemoveContainer" containerID="c2dc65ed0ded3e590909236009935b7d3121baa226e72bba24f728f4bfe1fa57" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.710078 4851 scope.go:117] "RemoveContainer" containerID="6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0" Feb 23 14:07:52 crc kubenswrapper[4851]: E0223 14:07:52.710553 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0\": container with ID starting with 6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0 not found: ID does not exist" containerID="6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.710621 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0"} err="failed to get container status \"6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0\": rpc error: code = NotFound desc = could not find container \"6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0\": container with ID starting with 6a9b061ceb031017c7105434a348977a9ced7e8f7efda9740c43653da0eed0e0 not found: ID does not exist" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.710662 4851 scope.go:117] "RemoveContainer" containerID="65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5" Feb 23 14:07:52 crc kubenswrapper[4851]: E0223 14:07:52.711024 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5\": container with ID starting with 65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5 not found: ID does not exist" containerID="65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.711067 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5"} err="failed to get container status \"65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5\": rpc error: code = NotFound desc = could not find container \"65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5\": container with ID starting with 65893905ce233ff647a115492e753c3ac94d4431ee58fca429309e79843349f5 not found: ID does not exist" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.711099 4851 scope.go:117] "RemoveContainer" containerID="c2dc65ed0ded3e590909236009935b7d3121baa226e72bba24f728f4bfe1fa57" Feb 23 14:07:52 crc kubenswrapper[4851]: E0223 14:07:52.711313 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2dc65ed0ded3e590909236009935b7d3121baa226e72bba24f728f4bfe1fa57\": container with ID starting with c2dc65ed0ded3e590909236009935b7d3121baa226e72bba24f728f4bfe1fa57 not found: ID does not exist" containerID="c2dc65ed0ded3e590909236009935b7d3121baa226e72bba24f728f4bfe1fa57" Feb 23 14:07:52 crc kubenswrapper[4851]: I0223 14:07:52.711377 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2dc65ed0ded3e590909236009935b7d3121baa226e72bba24f728f4bfe1fa57"} err="failed to get container status \"c2dc65ed0ded3e590909236009935b7d3121baa226e72bba24f728f4bfe1fa57\": rpc error: code = NotFound desc = could not find container 
\"c2dc65ed0ded3e590909236009935b7d3121baa226e72bba24f728f4bfe1fa57\": container with ID starting with c2dc65ed0ded3e590909236009935b7d3121baa226e72bba24f728f4bfe1fa57 not found: ID does not exist" Feb 23 14:07:53 crc kubenswrapper[4851]: I0223 14:07:53.820472 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca26bfa6-cf59-44d5-b371-b7489d9c4448" (UID: "ca26bfa6-cf59-44d5-b371-b7489d9c4448"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:07:53 crc kubenswrapper[4851]: I0223 14:07:53.865368 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca26bfa6-cf59-44d5-b371-b7489d9c4448-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 14:07:54 crc kubenswrapper[4851]: I0223 14:07:54.131582 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2bbnj"] Feb 23 14:07:54 crc kubenswrapper[4851]: I0223 14:07:54.145501 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2bbnj"] Feb 23 14:07:55 crc kubenswrapper[4851]: I0223 14:07:55.982086 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" path="/var/lib/kubelet/pods/ca26bfa6-cf59-44d5-b371-b7489d9c4448/volumes" Feb 23 14:08:41 crc kubenswrapper[4851]: I0223 14:08:41.925270 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 14:08:41 crc kubenswrapper[4851]: I0223 14:08:41.925727 4851 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 14:09:11 crc kubenswrapper[4851]: I0223 14:09:11.925404 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 14:09:11 crc kubenswrapper[4851]: I0223 14:09:11.925885 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 14:09:41 crc kubenswrapper[4851]: I0223 14:09:41.925201 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 14:09:41 crc kubenswrapper[4851]: I0223 14:09:41.927297 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 14:09:41 crc kubenswrapper[4851]: I0223 14:09:41.927664 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 14:09:41 crc 
kubenswrapper[4851]: I0223 14:09:41.928536 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2608ceb581f02745decb9aaeafec5770a4f1df96a11f0114b46173bf46dba1a"} pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 14:09:41 crc kubenswrapper[4851]: I0223 14:09:41.928695 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://d2608ceb581f02745decb9aaeafec5770a4f1df96a11f0114b46173bf46dba1a" gracePeriod=600 Feb 23 14:09:42 crc kubenswrapper[4851]: I0223 14:09:42.657205 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="d2608ceb581f02745decb9aaeafec5770a4f1df96a11f0114b46173bf46dba1a" exitCode=0 Feb 23 14:09:42 crc kubenswrapper[4851]: I0223 14:09:42.657250 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"d2608ceb581f02745decb9aaeafec5770a4f1df96a11f0114b46173bf46dba1a"} Feb 23 14:09:42 crc kubenswrapper[4851]: I0223 14:09:42.658174 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b"} Feb 23 14:09:42 crc kubenswrapper[4851]: I0223 14:09:42.658215 4851 scope.go:117] "RemoveContainer" containerID="6e64b293b638a63cd186fd20991563cb57cdd4cec8f616fc8733902bbb96b2ab" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.624554 4851 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-must-gather-5tk5n/must-gather-gwkqn"] Feb 23 14:10:01 crc kubenswrapper[4851]: E0223 14:10:01.626256 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" containerName="extract-utilities" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.626277 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" containerName="extract-utilities" Feb 23 14:10:01 crc kubenswrapper[4851]: E0223 14:10:01.626320 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" containerName="registry-server" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.626344 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" containerName="registry-server" Feb 23 14:10:01 crc kubenswrapper[4851]: E0223 14:10:01.626354 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" containerName="extract-content" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.626362 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" containerName="extract-content" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.626579 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca26bfa6-cf59-44d5-b371-b7489d9c4448" containerName="registry-server" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.628436 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5tk5n/must-gather-gwkqn" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.631308 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5tk5n"/"openshift-service-ca.crt" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.631470 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5tk5n"/"default-dockercfg-mc7l4" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.631403 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5tk5n"/"kube-root-ca.crt" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.637796 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5tk5n/must-gather-gwkqn"] Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.773203 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvscn\" (UniqueName: \"kubernetes.io/projected/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-kube-api-access-fvscn\") pod \"must-gather-gwkqn\" (UID: \"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59\") " pod="openshift-must-gather-5tk5n/must-gather-gwkqn" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.773671 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-must-gather-output\") pod \"must-gather-gwkqn\" (UID: \"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59\") " pod="openshift-must-gather-5tk5n/must-gather-gwkqn" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.875189 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvscn\" (UniqueName: \"kubernetes.io/projected/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-kube-api-access-fvscn\") pod \"must-gather-gwkqn\" (UID: \"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59\") " 
pod="openshift-must-gather-5tk5n/must-gather-gwkqn" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.875309 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-must-gather-output\") pod \"must-gather-gwkqn\" (UID: \"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59\") " pod="openshift-must-gather-5tk5n/must-gather-gwkqn" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.875842 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-must-gather-output\") pod \"must-gather-gwkqn\" (UID: \"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59\") " pod="openshift-must-gather-5tk5n/must-gather-gwkqn" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.897603 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvscn\" (UniqueName: \"kubernetes.io/projected/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-kube-api-access-fvscn\") pod \"must-gather-gwkqn\" (UID: \"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59\") " pod="openshift-must-gather-5tk5n/must-gather-gwkqn" Feb 23 14:10:01 crc kubenswrapper[4851]: I0223 14:10:01.955745 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5tk5n/must-gather-gwkqn" Feb 23 14:10:02 crc kubenswrapper[4851]: I0223 14:10:02.537811 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5tk5n/must-gather-gwkqn"] Feb 23 14:10:02 crc kubenswrapper[4851]: I0223 14:10:02.848905 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5tk5n/must-gather-gwkqn" event={"ID":"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59","Type":"ContainerStarted","Data":"39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1"} Feb 23 14:10:02 crc kubenswrapper[4851]: I0223 14:10:02.849288 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5tk5n/must-gather-gwkqn" event={"ID":"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59","Type":"ContainerStarted","Data":"ad67bdc0a26024d00c3223007737574e7527785939448fb88413ccfacdef5ee6"} Feb 23 14:10:03 crc kubenswrapper[4851]: I0223 14:10:03.859034 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5tk5n/must-gather-gwkqn" event={"ID":"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59","Type":"ContainerStarted","Data":"f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675"} Feb 23 14:10:03 crc kubenswrapper[4851]: I0223 14:10:03.874521 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5tk5n/must-gather-gwkqn" podStartSLOduration=2.874503488 podStartE2EDuration="2.874503488s" podCreationTimestamp="2026-02-23 14:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 14:10:03.871875373 +0000 UTC m=+3758.553579051" watchObservedRunningTime="2026-02-23 14:10:03.874503488 +0000 UTC m=+3758.556207166" Feb 23 14:10:06 crc kubenswrapper[4851]: I0223 14:10:06.466432 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5tk5n/crc-debug-t8rr5"] Feb 23 14:10:06 crc kubenswrapper[4851]: 
I0223 14:10:06.469109 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" Feb 23 14:10:06 crc kubenswrapper[4851]: I0223 14:10:06.571805 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5426465-367b-4548-85ff-e511e50b253c-host\") pod \"crc-debug-t8rr5\" (UID: \"d5426465-367b-4548-85ff-e511e50b253c\") " pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" Feb 23 14:10:06 crc kubenswrapper[4851]: I0223 14:10:06.571878 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp466\" (UniqueName: \"kubernetes.io/projected/d5426465-367b-4548-85ff-e511e50b253c-kube-api-access-gp466\") pod \"crc-debug-t8rr5\" (UID: \"d5426465-367b-4548-85ff-e511e50b253c\") " pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" Feb 23 14:10:06 crc kubenswrapper[4851]: I0223 14:10:06.673469 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5426465-367b-4548-85ff-e511e50b253c-host\") pod \"crc-debug-t8rr5\" (UID: \"d5426465-367b-4548-85ff-e511e50b253c\") " pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" Feb 23 14:10:06 crc kubenswrapper[4851]: I0223 14:10:06.673524 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp466\" (UniqueName: \"kubernetes.io/projected/d5426465-367b-4548-85ff-e511e50b253c-kube-api-access-gp466\") pod \"crc-debug-t8rr5\" (UID: \"d5426465-367b-4548-85ff-e511e50b253c\") " pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" Feb 23 14:10:06 crc kubenswrapper[4851]: I0223 14:10:06.673934 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5426465-367b-4548-85ff-e511e50b253c-host\") pod \"crc-debug-t8rr5\" (UID: \"d5426465-367b-4548-85ff-e511e50b253c\") 
" pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" Feb 23 14:10:06 crc kubenswrapper[4851]: I0223 14:10:06.704638 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp466\" (UniqueName: \"kubernetes.io/projected/d5426465-367b-4548-85ff-e511e50b253c-kube-api-access-gp466\") pod \"crc-debug-t8rr5\" (UID: \"d5426465-367b-4548-85ff-e511e50b253c\") " pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" Feb 23 14:10:06 crc kubenswrapper[4851]: I0223 14:10:06.793625 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" Feb 23 14:10:06 crc kubenswrapper[4851]: I0223 14:10:06.910081 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" event={"ID":"d5426465-367b-4548-85ff-e511e50b253c","Type":"ContainerStarted","Data":"34cd82135fe5c51d1eca07658e1b3ba3c1c09bb319b5c5c96c0e184ebc48ebe2"} Feb 23 14:10:07 crc kubenswrapper[4851]: I0223 14:10:07.939403 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" event={"ID":"d5426465-367b-4548-85ff-e511e50b253c","Type":"ContainerStarted","Data":"75c50723e558b0b1b72288ee03e98ea511fa9c5bca153a58c33ef42ebb442314"} Feb 23 14:10:07 crc kubenswrapper[4851]: I0223 14:10:07.978066 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" podStartSLOduration=1.978045963 podStartE2EDuration="1.978045963s" podCreationTimestamp="2026-02-23 14:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 14:10:07.957856162 +0000 UTC m=+3762.639559860" watchObservedRunningTime="2026-02-23 14:10:07.978045963 +0000 UTC m=+3762.659749641" Feb 23 14:10:44 crc kubenswrapper[4851]: I0223 14:10:44.244740 4851 generic.go:334] "Generic (PLEG): container finished" 
podID="d5426465-367b-4548-85ff-e511e50b253c" containerID="75c50723e558b0b1b72288ee03e98ea511fa9c5bca153a58c33ef42ebb442314" exitCode=0 Feb 23 14:10:44 crc kubenswrapper[4851]: I0223 14:10:44.244925 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" event={"ID":"d5426465-367b-4548-85ff-e511e50b253c","Type":"ContainerDied","Data":"75c50723e558b0b1b72288ee03e98ea511fa9c5bca153a58c33ef42ebb442314"} Feb 23 14:10:45 crc kubenswrapper[4851]: I0223 14:10:45.352496 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" Feb 23 14:10:45 crc kubenswrapper[4851]: I0223 14:10:45.387231 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5tk5n/crc-debug-t8rr5"] Feb 23 14:10:45 crc kubenswrapper[4851]: I0223 14:10:45.399086 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5tk5n/crc-debug-t8rr5"] Feb 23 14:10:45 crc kubenswrapper[4851]: I0223 14:10:45.506586 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp466\" (UniqueName: \"kubernetes.io/projected/d5426465-367b-4548-85ff-e511e50b253c-kube-api-access-gp466\") pod \"d5426465-367b-4548-85ff-e511e50b253c\" (UID: \"d5426465-367b-4548-85ff-e511e50b253c\") " Feb 23 14:10:45 crc kubenswrapper[4851]: I0223 14:10:45.506855 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5426465-367b-4548-85ff-e511e50b253c-host\") pod \"d5426465-367b-4548-85ff-e511e50b253c\" (UID: \"d5426465-367b-4548-85ff-e511e50b253c\") " Feb 23 14:10:45 crc kubenswrapper[4851]: I0223 14:10:45.507101 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5426465-367b-4548-85ff-e511e50b253c-host" (OuterVolumeSpecName: "host") pod "d5426465-367b-4548-85ff-e511e50b253c" (UID: 
"d5426465-367b-4548-85ff-e511e50b253c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 14:10:45 crc kubenswrapper[4851]: I0223 14:10:45.507239 4851 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d5426465-367b-4548-85ff-e511e50b253c-host\") on node \"crc\" DevicePath \"\"" Feb 23 14:10:45 crc kubenswrapper[4851]: I0223 14:10:45.516903 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5426465-367b-4548-85ff-e511e50b253c-kube-api-access-gp466" (OuterVolumeSpecName: "kube-api-access-gp466") pod "d5426465-367b-4548-85ff-e511e50b253c" (UID: "d5426465-367b-4548-85ff-e511e50b253c"). InnerVolumeSpecName "kube-api-access-gp466". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:10:45 crc kubenswrapper[4851]: I0223 14:10:45.608571 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp466\" (UniqueName: \"kubernetes.io/projected/d5426465-367b-4548-85ff-e511e50b253c-kube-api-access-gp466\") on node \"crc\" DevicePath \"\"" Feb 23 14:10:45 crc kubenswrapper[4851]: I0223 14:10:45.979309 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5426465-367b-4548-85ff-e511e50b253c" path="/var/lib/kubelet/pods/d5426465-367b-4548-85ff-e511e50b253c/volumes" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.261530 4851 scope.go:117] "RemoveContainer" containerID="75c50723e558b0b1b72288ee03e98ea511fa9c5bca153a58c33ef42ebb442314" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.261659 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-t8rr5" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.575957 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5tk5n/crc-debug-cns4d"] Feb 23 14:10:46 crc kubenswrapper[4851]: E0223 14:10:46.576611 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5426465-367b-4548-85ff-e511e50b253c" containerName="container-00" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.576623 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5426465-367b-4548-85ff-e511e50b253c" containerName="container-00" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.586146 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5426465-367b-4548-85ff-e511e50b253c" containerName="container-00" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.587260 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-cns4d" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.729762 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51fd50a6-ac00-48ff-962a-84e801224c78-host\") pod \"crc-debug-cns4d\" (UID: \"51fd50a6-ac00-48ff-962a-84e801224c78\") " pod="openshift-must-gather-5tk5n/crc-debug-cns4d" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.730036 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkfbk\" (UniqueName: \"kubernetes.io/projected/51fd50a6-ac00-48ff-962a-84e801224c78-kube-api-access-qkfbk\") pod \"crc-debug-cns4d\" (UID: \"51fd50a6-ac00-48ff-962a-84e801224c78\") " pod="openshift-must-gather-5tk5n/crc-debug-cns4d" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.831825 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/51fd50a6-ac00-48ff-962a-84e801224c78-host\") pod \"crc-debug-cns4d\" (UID: \"51fd50a6-ac00-48ff-962a-84e801224c78\") " pod="openshift-must-gather-5tk5n/crc-debug-cns4d" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.831920 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkfbk\" (UniqueName: \"kubernetes.io/projected/51fd50a6-ac00-48ff-962a-84e801224c78-kube-api-access-qkfbk\") pod \"crc-debug-cns4d\" (UID: \"51fd50a6-ac00-48ff-962a-84e801224c78\") " pod="openshift-must-gather-5tk5n/crc-debug-cns4d" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.832110 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51fd50a6-ac00-48ff-962a-84e801224c78-host\") pod \"crc-debug-cns4d\" (UID: \"51fd50a6-ac00-48ff-962a-84e801224c78\") " pod="openshift-must-gather-5tk5n/crc-debug-cns4d" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.850027 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkfbk\" (UniqueName: \"kubernetes.io/projected/51fd50a6-ac00-48ff-962a-84e801224c78-kube-api-access-qkfbk\") pod \"crc-debug-cns4d\" (UID: \"51fd50a6-ac00-48ff-962a-84e801224c78\") " pod="openshift-must-gather-5tk5n/crc-debug-cns4d" Feb 23 14:10:46 crc kubenswrapper[4851]: I0223 14:10:46.907108 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-cns4d" Feb 23 14:10:47 crc kubenswrapper[4851]: I0223 14:10:47.270345 4851 generic.go:334] "Generic (PLEG): container finished" podID="51fd50a6-ac00-48ff-962a-84e801224c78" containerID="df5e34e528948b241814c7a4b9df55bc89ab5d4d237c28ac71c7401cb69bae8e" exitCode=0 Feb 23 14:10:47 crc kubenswrapper[4851]: I0223 14:10:47.270474 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5tk5n/crc-debug-cns4d" event={"ID":"51fd50a6-ac00-48ff-962a-84e801224c78","Type":"ContainerDied","Data":"df5e34e528948b241814c7a4b9df55bc89ab5d4d237c28ac71c7401cb69bae8e"} Feb 23 14:10:47 crc kubenswrapper[4851]: I0223 14:10:47.271016 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5tk5n/crc-debug-cns4d" event={"ID":"51fd50a6-ac00-48ff-962a-84e801224c78","Type":"ContainerStarted","Data":"17a98a84ca6c91e579ccb3475d415847d72c4868f0d664a97e3bb8bf8c5c92eb"} Feb 23 14:10:47 crc kubenswrapper[4851]: I0223 14:10:47.693794 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5tk5n/crc-debug-cns4d"] Feb 23 14:10:47 crc kubenswrapper[4851]: I0223 14:10:47.701605 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5tk5n/crc-debug-cns4d"] Feb 23 14:10:48 crc kubenswrapper[4851]: I0223 14:10:48.381998 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-cns4d" Feb 23 14:10:48 crc kubenswrapper[4851]: I0223 14:10:48.560245 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51fd50a6-ac00-48ff-962a-84e801224c78-host\") pod \"51fd50a6-ac00-48ff-962a-84e801224c78\" (UID: \"51fd50a6-ac00-48ff-962a-84e801224c78\") " Feb 23 14:10:48 crc kubenswrapper[4851]: I0223 14:10:48.560425 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkfbk\" (UniqueName: \"kubernetes.io/projected/51fd50a6-ac00-48ff-962a-84e801224c78-kube-api-access-qkfbk\") pod \"51fd50a6-ac00-48ff-962a-84e801224c78\" (UID: \"51fd50a6-ac00-48ff-962a-84e801224c78\") " Feb 23 14:10:48 crc kubenswrapper[4851]: I0223 14:10:48.560434 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51fd50a6-ac00-48ff-962a-84e801224c78-host" (OuterVolumeSpecName: "host") pod "51fd50a6-ac00-48ff-962a-84e801224c78" (UID: "51fd50a6-ac00-48ff-962a-84e801224c78"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 14:10:48 crc kubenswrapper[4851]: I0223 14:10:48.560849 4851 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/51fd50a6-ac00-48ff-962a-84e801224c78-host\") on node \"crc\" DevicePath \"\"" Feb 23 14:10:48 crc kubenswrapper[4851]: I0223 14:10:48.565658 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51fd50a6-ac00-48ff-962a-84e801224c78-kube-api-access-qkfbk" (OuterVolumeSpecName: "kube-api-access-qkfbk") pod "51fd50a6-ac00-48ff-962a-84e801224c78" (UID: "51fd50a6-ac00-48ff-962a-84e801224c78"). InnerVolumeSpecName "kube-api-access-qkfbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:10:48 crc kubenswrapper[4851]: I0223 14:10:48.662977 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkfbk\" (UniqueName: \"kubernetes.io/projected/51fd50a6-ac00-48ff-962a-84e801224c78-kube-api-access-qkfbk\") on node \"crc\" DevicePath \"\"" Feb 23 14:10:48 crc kubenswrapper[4851]: I0223 14:10:48.898779 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5tk5n/crc-debug-2kwfs"] Feb 23 14:10:48 crc kubenswrapper[4851]: E0223 14:10:48.899271 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51fd50a6-ac00-48ff-962a-84e801224c78" containerName="container-00" Feb 23 14:10:48 crc kubenswrapper[4851]: I0223 14:10:48.899287 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="51fd50a6-ac00-48ff-962a-84e801224c78" containerName="container-00" Feb 23 14:10:48 crc kubenswrapper[4851]: I0223 14:10:48.899564 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="51fd50a6-ac00-48ff-962a-84e801224c78" containerName="container-00" Feb 23 14:10:48 crc kubenswrapper[4851]: I0223 14:10:48.900317 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" Feb 23 14:10:49 crc kubenswrapper[4851]: I0223 14:10:49.070034 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x2jj\" (UniqueName: \"kubernetes.io/projected/a4377290-4b37-45fa-b46d-6332f5b02b05-kube-api-access-9x2jj\") pod \"crc-debug-2kwfs\" (UID: \"a4377290-4b37-45fa-b46d-6332f5b02b05\") " pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" Feb 23 14:10:49 crc kubenswrapper[4851]: I0223 14:10:49.070378 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4377290-4b37-45fa-b46d-6332f5b02b05-host\") pod \"crc-debug-2kwfs\" (UID: \"a4377290-4b37-45fa-b46d-6332f5b02b05\") " pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" Feb 23 14:10:49 crc kubenswrapper[4851]: I0223 14:10:49.172136 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x2jj\" (UniqueName: \"kubernetes.io/projected/a4377290-4b37-45fa-b46d-6332f5b02b05-kube-api-access-9x2jj\") pod \"crc-debug-2kwfs\" (UID: \"a4377290-4b37-45fa-b46d-6332f5b02b05\") " pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" Feb 23 14:10:49 crc kubenswrapper[4851]: I0223 14:10:49.172197 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4377290-4b37-45fa-b46d-6332f5b02b05-host\") pod \"crc-debug-2kwfs\" (UID: \"a4377290-4b37-45fa-b46d-6332f5b02b05\") " pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" Feb 23 14:10:49 crc kubenswrapper[4851]: I0223 14:10:49.172390 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4377290-4b37-45fa-b46d-6332f5b02b05-host\") pod \"crc-debug-2kwfs\" (UID: \"a4377290-4b37-45fa-b46d-6332f5b02b05\") " pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" Feb 23 14:10:49 crc 
kubenswrapper[4851]: I0223 14:10:49.194313 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x2jj\" (UniqueName: \"kubernetes.io/projected/a4377290-4b37-45fa-b46d-6332f5b02b05-kube-api-access-9x2jj\") pod \"crc-debug-2kwfs\" (UID: \"a4377290-4b37-45fa-b46d-6332f5b02b05\") " pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" Feb 23 14:10:49 crc kubenswrapper[4851]: I0223 14:10:49.216973 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" Feb 23 14:10:49 crc kubenswrapper[4851]: W0223 14:10:49.248801 4851 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4377290_4b37_45fa_b46d_6332f5b02b05.slice/crio-438d6d3f3d26f88bfa0d8fa6bd5ad357b816168e55a195071b4d0f58b10daed2 WatchSource:0}: Error finding container 438d6d3f3d26f88bfa0d8fa6bd5ad357b816168e55a195071b4d0f58b10daed2: Status 404 returned error can't find the container with id 438d6d3f3d26f88bfa0d8fa6bd5ad357b816168e55a195071b4d0f58b10daed2 Feb 23 14:10:49 crc kubenswrapper[4851]: I0223 14:10:49.289115 4851 scope.go:117] "RemoveContainer" containerID="df5e34e528948b241814c7a4b9df55bc89ab5d4d237c28ac71c7401cb69bae8e" Feb 23 14:10:49 crc kubenswrapper[4851]: I0223 14:10:49.289111 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-cns4d" Feb 23 14:10:49 crc kubenswrapper[4851]: I0223 14:10:49.291295 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" event={"ID":"a4377290-4b37-45fa-b46d-6332f5b02b05","Type":"ContainerStarted","Data":"438d6d3f3d26f88bfa0d8fa6bd5ad357b816168e55a195071b4d0f58b10daed2"} Feb 23 14:10:49 crc kubenswrapper[4851]: I0223 14:10:49.978292 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51fd50a6-ac00-48ff-962a-84e801224c78" path="/var/lib/kubelet/pods/51fd50a6-ac00-48ff-962a-84e801224c78/volumes" Feb 23 14:10:50 crc kubenswrapper[4851]: I0223 14:10:50.301699 4851 generic.go:334] "Generic (PLEG): container finished" podID="a4377290-4b37-45fa-b46d-6332f5b02b05" containerID="105900f5496d06b4fd7fccceef6e76a307c015480e30f1050bdee16d45570483" exitCode=0 Feb 23 14:10:50 crc kubenswrapper[4851]: I0223 14:10:50.301741 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" event={"ID":"a4377290-4b37-45fa-b46d-6332f5b02b05","Type":"ContainerDied","Data":"105900f5496d06b4fd7fccceef6e76a307c015480e30f1050bdee16d45570483"} Feb 23 14:10:50 crc kubenswrapper[4851]: I0223 14:10:50.339601 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5tk5n/crc-debug-2kwfs"] Feb 23 14:10:50 crc kubenswrapper[4851]: I0223 14:10:50.370137 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5tk5n/crc-debug-2kwfs"] Feb 23 14:10:51 crc kubenswrapper[4851]: I0223 14:10:51.428703 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" Feb 23 14:10:51 crc kubenswrapper[4851]: I0223 14:10:51.618143 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4377290-4b37-45fa-b46d-6332f5b02b05-host\") pod \"a4377290-4b37-45fa-b46d-6332f5b02b05\" (UID: \"a4377290-4b37-45fa-b46d-6332f5b02b05\") " Feb 23 14:10:51 crc kubenswrapper[4851]: I0223 14:10:51.618235 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4377290-4b37-45fa-b46d-6332f5b02b05-host" (OuterVolumeSpecName: "host") pod "a4377290-4b37-45fa-b46d-6332f5b02b05" (UID: "a4377290-4b37-45fa-b46d-6332f5b02b05"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 14:10:51 crc kubenswrapper[4851]: I0223 14:10:51.618295 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x2jj\" (UniqueName: \"kubernetes.io/projected/a4377290-4b37-45fa-b46d-6332f5b02b05-kube-api-access-9x2jj\") pod \"a4377290-4b37-45fa-b46d-6332f5b02b05\" (UID: \"a4377290-4b37-45fa-b46d-6332f5b02b05\") " Feb 23 14:10:51 crc kubenswrapper[4851]: I0223 14:10:51.618839 4851 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4377290-4b37-45fa-b46d-6332f5b02b05-host\") on node \"crc\" DevicePath \"\"" Feb 23 14:10:51 crc kubenswrapper[4851]: I0223 14:10:51.623504 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4377290-4b37-45fa-b46d-6332f5b02b05-kube-api-access-9x2jj" (OuterVolumeSpecName: "kube-api-access-9x2jj") pod "a4377290-4b37-45fa-b46d-6332f5b02b05" (UID: "a4377290-4b37-45fa-b46d-6332f5b02b05"). InnerVolumeSpecName "kube-api-access-9x2jj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:10:51 crc kubenswrapper[4851]: I0223 14:10:51.721279 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x2jj\" (UniqueName: \"kubernetes.io/projected/a4377290-4b37-45fa-b46d-6332f5b02b05-kube-api-access-9x2jj\") on node \"crc\" DevicePath \"\"" Feb 23 14:10:51 crc kubenswrapper[4851]: I0223 14:10:51.980906 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4377290-4b37-45fa-b46d-6332f5b02b05" path="/var/lib/kubelet/pods/a4377290-4b37-45fa-b46d-6332f5b02b05/volumes" Feb 23 14:10:52 crc kubenswrapper[4851]: I0223 14:10:52.331061 4851 scope.go:117] "RemoveContainer" containerID="105900f5496d06b4fd7fccceef6e76a307c015480e30f1050bdee16d45570483" Feb 23 14:10:52 crc kubenswrapper[4851]: I0223 14:10:52.331192 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5tk5n/crc-debug-2kwfs" Feb 23 14:11:17 crc kubenswrapper[4851]: I0223 14:11:17.197687 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6bdd9b889b-qd9cm_6134ed19-8856-4c53-b30c-eee8089381fb/barbican-api/0.log" Feb 23 14:11:17 crc kubenswrapper[4851]: I0223 14:11:17.355352 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6bdd9b889b-qd9cm_6134ed19-8856-4c53-b30c-eee8089381fb/barbican-api-log/0.log" Feb 23 14:11:17 crc kubenswrapper[4851]: I0223 14:11:17.398979 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-66757bd65d-pn2zh_42f43676-ccd3-45e3-b729-ab33430aca9a/barbican-keystone-listener/0.log" Feb 23 14:11:17 crc kubenswrapper[4851]: I0223 14:11:17.458422 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-66757bd65d-pn2zh_42f43676-ccd3-45e3-b729-ab33430aca9a/barbican-keystone-listener-log/0.log" Feb 23 14:11:17 crc kubenswrapper[4851]: I0223 14:11:17.600632 4851 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7546f4466c-vlsxg_fc271fbe-58c9-4eca-adfd-63ff51aa46fa/barbican-worker/0.log" Feb 23 14:11:17 crc kubenswrapper[4851]: I0223 14:11:17.623632 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7546f4466c-vlsxg_fc271fbe-58c9-4eca-adfd-63ff51aa46fa/barbican-worker-log/0.log" Feb 23 14:11:17 crc kubenswrapper[4851]: I0223 14:11:17.788450 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-drtk9_a83f6021-68fd-4a69-8d49-534de4546eee/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:17 crc kubenswrapper[4851]: I0223 14:11:17.864217 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b0421c96-8b66-48fd-9778-da16d4eb8ef0/ceilometer-central-agent/0.log" Feb 23 14:11:17 crc kubenswrapper[4851]: I0223 14:11:17.932809 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b0421c96-8b66-48fd-9778-da16d4eb8ef0/ceilometer-notification-agent/0.log" Feb 23 14:11:17 crc kubenswrapper[4851]: I0223 14:11:17.991465 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b0421c96-8b66-48fd-9778-da16d4eb8ef0/proxy-httpd/0.log" Feb 23 14:11:18 crc kubenswrapper[4851]: I0223 14:11:18.036404 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b0421c96-8b66-48fd-9778-da16d4eb8ef0/sg-core/0.log" Feb 23 14:11:18 crc kubenswrapper[4851]: I0223 14:11:18.182944 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c260317a-0cb6-475e-b780-50f6de86dda2/cinder-api/0.log" Feb 23 14:11:18 crc kubenswrapper[4851]: I0223 14:11:18.209339 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c260317a-0cb6-475e-b780-50f6de86dda2/cinder-api-log/0.log" Feb 23 14:11:18 crc kubenswrapper[4851]: I0223 
14:11:18.376020 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2894d16c-17aa-4037-afa2-37081858ab01/cinder-scheduler/0.log" Feb 23 14:11:18 crc kubenswrapper[4851]: I0223 14:11:18.396464 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2894d16c-17aa-4037-afa2-37081858ab01/probe/0.log" Feb 23 14:11:18 crc kubenswrapper[4851]: I0223 14:11:18.560214 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-qrrpz_1a7542f5-0c08-40fc-a218-f196e7769853/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:18 crc kubenswrapper[4851]: I0223 14:11:18.611324 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-4fdws_75449ea8-fea6-480f-8a8c-10d24081a76f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:18 crc kubenswrapper[4851]: I0223 14:11:18.769277 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-vjdpf_b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f/init/0.log" Feb 23 14:11:18 crc kubenswrapper[4851]: I0223 14:11:18.918091 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-vjdpf_b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f/init/0.log" Feb 23 14:11:18 crc kubenswrapper[4851]: I0223 14:11:18.941777 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-vjdpf_b9e3b7d0-9a87-4cfd-9a4a-24897893cf5f/dnsmasq-dns/0.log" Feb 23 14:11:18 crc kubenswrapper[4851]: I0223 14:11:18.981997 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-fq8hx_1a88cbca-158c-4879-a5ef-48b9714a4043/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:19 crc kubenswrapper[4851]: I0223 14:11:19.115017 4851 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_glance-default-external-api-0_839d4518-f84b-4a2c-81eb-c0112da70e71/glance-httpd/0.log" Feb 23 14:11:19 crc kubenswrapper[4851]: I0223 14:11:19.205720 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_839d4518-f84b-4a2c-81eb-c0112da70e71/glance-log/0.log" Feb 23 14:11:19 crc kubenswrapper[4851]: I0223 14:11:19.319320 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_17103db4-b198-4896-8bec-1e1d1bf8efa1/glance-httpd/0.log" Feb 23 14:11:19 crc kubenswrapper[4851]: I0223 14:11:19.431271 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_17103db4-b198-4896-8bec-1e1d1bf8efa1/glance-log/0.log" Feb 23 14:11:19 crc kubenswrapper[4851]: I0223 14:11:19.487444 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-64f4c4f478-f578z_1c52d079-d9d5-469e-9319-08266bea1f82/horizon/0.log" Feb 23 14:11:19 crc kubenswrapper[4851]: I0223 14:11:19.707710 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-4hzzh_fde15470-10ed-44ef-8ba7-a03c9046f828/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:19 crc kubenswrapper[4851]: I0223 14:11:19.912580 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-rxvsh_bc81c277-18fa-44d5-8211-37e2b5ca5069/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:19 crc kubenswrapper[4851]: I0223 14:11:19.921064 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-64f4c4f478-f578z_1c52d079-d9d5-469e-9319-08266bea1f82/horizon-log/0.log" Feb 23 14:11:20 crc kubenswrapper[4851]: I0223 14:11:20.177302 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-b7f866994-tdwdz_20cfa6bd-a3d2-4e2c-9655-6b4db78b1771/keystone-api/0.log" Feb 23 14:11:20 crc kubenswrapper[4851]: I0223 14:11:20.218485 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29530921-92sp9_9a2c0f95-a0db-4c27-b0c1-9718d2ea4b3e/keystone-cron/0.log" Feb 23 14:11:20 crc kubenswrapper[4851]: I0223 14:11:20.370185 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a0a55625-8b81-4ce9-afb2-2220598ce375/kube-state-metrics/0.log" Feb 23 14:11:20 crc kubenswrapper[4851]: I0223 14:11:20.440219 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ck6q5_7c2e7e67-6f8c-4e78-ae0a-d1e6fc2b34d8/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:20 crc kubenswrapper[4851]: I0223 14:11:20.818913 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bd58878f7-xhsz6_293a3d32-d143-4600-bb4e-50f2c5783f67/neutron-api/0.log" Feb 23 14:11:20 crc kubenswrapper[4851]: I0223 14:11:20.958285 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7bd58878f7-xhsz6_293a3d32-d143-4600-bb4e-50f2c5783f67/neutron-httpd/0.log" Feb 23 14:11:21 crc kubenswrapper[4851]: I0223 14:11:21.062031 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-f444v_01be1f4b-d5f3-4dbe-b528-118617cdad1e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:21 crc kubenswrapper[4851]: I0223 14:11:21.696295 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f7011aa2-a15d-4c99-b0a1-ae8d530b84c2/nova-cell0-conductor-conductor/0.log" Feb 23 14:11:21 crc kubenswrapper[4851]: I0223 14:11:21.705264 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_6c5dd0b3-902e-4156-9538-fccbb6f319ae/nova-api-log/0.log" Feb 23 14:11:21 crc kubenswrapper[4851]: I0223 14:11:21.962104 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_b5c64fb8-ea75-4ede-b285-7aeb434b96d4/nova-cell1-conductor-conductor/0.log" Feb 23 14:11:22 crc kubenswrapper[4851]: I0223 14:11:22.073987 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6c5dd0b3-902e-4156-9538-fccbb6f319ae/nova-api-api/0.log" Feb 23 14:11:22 crc kubenswrapper[4851]: I0223 14:11:22.113874 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0284ac99-e112-44af-b198-eb9d42478701/nova-cell1-novncproxy-novncproxy/0.log" Feb 23 14:11:22 crc kubenswrapper[4851]: I0223 14:11:22.222426 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-q95kg_85e1b392-9aa6-4cd1-93b0-fa3587de47ac/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:22 crc kubenswrapper[4851]: I0223 14:11:22.496703 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f06b9e12-5e93-4ed8-80f1-733ce28508c1/nova-metadata-log/0.log" Feb 23 14:11:22 crc kubenswrapper[4851]: I0223 14:11:22.775556 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cb3d0e2c-9427-4585-8f01-0e1640feca9a/mysql-bootstrap/0.log" Feb 23 14:11:22 crc kubenswrapper[4851]: I0223 14:11:22.808382 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_dcffae8a-b5fd-49bf-9316-1cc871d0568c/nova-scheduler-scheduler/0.log" Feb 23 14:11:23 crc kubenswrapper[4851]: I0223 14:11:23.044400 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cb3d0e2c-9427-4585-8f01-0e1640feca9a/galera/0.log" Feb 23 14:11:23 crc kubenswrapper[4851]: I0223 14:11:23.062278 4851 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cb3d0e2c-9427-4585-8f01-0e1640feca9a/mysql-bootstrap/0.log" Feb 23 14:11:23 crc kubenswrapper[4851]: I0223 14:11:23.259212 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a0d61403-fda9-4081-8c39-32ff86cc879c/mysql-bootstrap/0.log" Feb 23 14:11:23 crc kubenswrapper[4851]: I0223 14:11:23.435013 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a0d61403-fda9-4081-8c39-32ff86cc879c/mysql-bootstrap/0.log" Feb 23 14:11:23 crc kubenswrapper[4851]: I0223 14:11:23.475568 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a0d61403-fda9-4081-8c39-32ff86cc879c/galera/0.log" Feb 23 14:11:23 crc kubenswrapper[4851]: I0223 14:11:23.659807 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_86b670f3-6886-4a48-b0ec-a109e93c87a0/openstackclient/0.log" Feb 23 14:11:23 crc kubenswrapper[4851]: I0223 14:11:23.771270 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2rf22_f366da8b-d0d3-411e-afec-53af288b0c42/ovn-controller/0.log" Feb 23 14:11:23 crc kubenswrapper[4851]: I0223 14:11:23.831252 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f06b9e12-5e93-4ed8-80f1-733ce28508c1/nova-metadata-metadata/0.log" Feb 23 14:11:23 crc kubenswrapper[4851]: I0223 14:11:23.937560 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bnnms_a4a8e6ca-ded8-4dff-8c84-bdc0193fb65d/openstack-network-exporter/0.log" Feb 23 14:11:24 crc kubenswrapper[4851]: I0223 14:11:24.089747 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-42p6n_d88acd5e-87c7-4b36-9aad-d20d44b7d0bf/ovsdb-server-init/0.log" Feb 23 14:11:24 crc kubenswrapper[4851]: I0223 14:11:24.329596 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-42p6n_d88acd5e-87c7-4b36-9aad-d20d44b7d0bf/ovsdb-server/0.log" Feb 23 14:11:24 crc kubenswrapper[4851]: I0223 14:11:24.362447 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-42p6n_d88acd5e-87c7-4b36-9aad-d20d44b7d0bf/ovs-vswitchd/0.log" Feb 23 14:11:24 crc kubenswrapper[4851]: I0223 14:11:24.392848 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-42p6n_d88acd5e-87c7-4b36-9aad-d20d44b7d0bf/ovsdb-server-init/0.log" Feb 23 14:11:24 crc kubenswrapper[4851]: I0223 14:11:24.606751 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2/openstack-network-exporter/0.log" Feb 23 14:11:24 crc kubenswrapper[4851]: I0223 14:11:24.611539 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-5frpq_b70c39f9-b146-4980-bb34-0034ed5b8b86/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:24 crc kubenswrapper[4851]: I0223 14:11:24.791398 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a19a0748-e3ff-4dc8-8ab7-bff13ca0a5c2/ovn-northd/0.log" Feb 23 14:11:25 crc kubenswrapper[4851]: I0223 14:11:25.056945 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d6c7eb0b-bab9-47af-b0e9-fd539479e252/ovsdbserver-nb/0.log" Feb 23 14:11:25 crc kubenswrapper[4851]: I0223 14:11:25.092018 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_d6c7eb0b-bab9-47af-b0e9-fd539479e252/openstack-network-exporter/0.log" Feb 23 14:11:25 crc kubenswrapper[4851]: I0223 14:11:25.273824 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_68f10652-af07-4024-b8b6-91d8e8974144/openstack-network-exporter/0.log" Feb 23 14:11:25 crc kubenswrapper[4851]: I0223 14:11:25.369605 4851 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_68f10652-af07-4024-b8b6-91d8e8974144/ovsdbserver-sb/0.log" Feb 23 14:11:25 crc kubenswrapper[4851]: I0223 14:11:25.415923 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7d6d46b468-7drjb_4c14e85a-4380-49f8-8311-abcaa3587c47/placement-api/0.log" Feb 23 14:11:25 crc kubenswrapper[4851]: I0223 14:11:25.642490 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d2aa1b0e-e4a7-4365-99c9-4e521e896925/setup-container/0.log" Feb 23 14:11:25 crc kubenswrapper[4851]: I0223 14:11:25.655692 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7d6d46b468-7drjb_4c14e85a-4380-49f8-8311-abcaa3587c47/placement-log/0.log" Feb 23 14:11:25 crc kubenswrapper[4851]: I0223 14:11:25.847066 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d2aa1b0e-e4a7-4365-99c9-4e521e896925/setup-container/0.log" Feb 23 14:11:25 crc kubenswrapper[4851]: I0223 14:11:25.855415 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d2aa1b0e-e4a7-4365-99c9-4e521e896925/rabbitmq/0.log" Feb 23 14:11:25 crc kubenswrapper[4851]: I0223 14:11:25.928422 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_44d82832-bb2c-4bfe-a9c0-a22e00484c71/setup-container/0.log" Feb 23 14:11:26 crc kubenswrapper[4851]: I0223 14:11:26.187992 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_44d82832-bb2c-4bfe-a9c0-a22e00484c71/rabbitmq/0.log" Feb 23 14:11:26 crc kubenswrapper[4851]: I0223 14:11:26.188499 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_44d82832-bb2c-4bfe-a9c0-a22e00484c71/setup-container/0.log" Feb 23 14:11:26 crc kubenswrapper[4851]: I0223 14:11:26.250968 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5z9fm_964cf639-e7d3-402e-80f9-d8d27ebf5db7/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:26 crc kubenswrapper[4851]: I0223 14:11:26.455464 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zgsdq_7dc4f23f-fc11-4cf6-9740-ec259ac3823e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:26 crc kubenswrapper[4851]: I0223 14:11:26.511561 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rmzcj_0a0aa0bc-787f-43f3-a2fe-8e90b3b2658a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:26 crc kubenswrapper[4851]: I0223 14:11:26.739898 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-kd2p6_3a4f0b71-7653-49fa-9155-3e0d4197e087/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:26 crc kubenswrapper[4851]: I0223 14:11:26.750966 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-thkvh_6d0c05be-7530-42cf-86e9-e0d67e24ce4d/ssh-known-hosts-edpm-deployment/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.034483 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5ccf5dc859-8drcp_70c040ea-0409-4501-9416-f1f40c5c6882/proxy-server/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.115037 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5ccf5dc859-8drcp_70c040ea-0409-4501-9416-f1f40c5c6882/proxy-httpd/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.146308 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-947gb_365ea813-ed43-4771-a20a-d8ad58487d86/swift-ring-rebalance/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.368366 4851 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/account-auditor/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.406801 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/account-reaper/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.436501 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/account-replicator/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.632010 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/container-replicator/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.663755 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/container-auditor/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.702077 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/account-server/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.750738 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/container-server/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.881656 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/container-updater/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.943109 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/object-expirer/0.log" Feb 23 14:11:27 crc kubenswrapper[4851]: I0223 14:11:27.944454 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/object-auditor/0.log" Feb 23 14:11:28 crc kubenswrapper[4851]: I0223 14:11:28.026156 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/object-replicator/0.log" Feb 23 14:11:28 crc kubenswrapper[4851]: I0223 14:11:28.121406 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/object-server/0.log" Feb 23 14:11:28 crc kubenswrapper[4851]: I0223 14:11:28.144758 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/object-updater/0.log" Feb 23 14:11:28 crc kubenswrapper[4851]: I0223 14:11:28.189462 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/rsync/0.log" Feb 23 14:11:28 crc kubenswrapper[4851]: I0223 14:11:28.270482 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e3cd939-1e76-4a55-bb7b-614ae880e79c/swift-recon-cron/0.log" Feb 23 14:11:28 crc kubenswrapper[4851]: I0223 14:11:28.463720 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-mhk45_ec787d1d-3f44-445b-a2ad-0d0b9ce7f476/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:28 crc kubenswrapper[4851]: I0223 14:11:28.547470 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_85d7dda0-1545-4b56-9694-c704cfec078c/tempest-tests-tempest-tests-runner/0.log" Feb 23 14:11:28 crc kubenswrapper[4851]: I0223 14:11:28.721047 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_68dbe829-aaf9-45eb-9b13-1c7e73a34cb6/test-operator-logs-container/0.log" Feb 23 14:11:28 crc kubenswrapper[4851]: I0223 
14:11:28.849213 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-7phqr_33d023b8-6967-4bc9-813e-08892dfa7107/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 14:11:38 crc kubenswrapper[4851]: I0223 14:11:38.172675 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_aee353b8-8a37-4055-a016-2c1aac2cf20b/memcached/0.log" Feb 23 14:11:54 crc kubenswrapper[4851]: I0223 14:11:54.648940 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/util/0.log" Feb 23 14:11:54 crc kubenswrapper[4851]: I0223 14:11:54.823732 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/util/0.log" Feb 23 14:11:54 crc kubenswrapper[4851]: I0223 14:11:54.843440 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/pull/0.log" Feb 23 14:11:54 crc kubenswrapper[4851]: I0223 14:11:54.924163 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/pull/0.log" Feb 23 14:11:55 crc kubenswrapper[4851]: I0223 14:11:55.121097 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/extract/0.log" Feb 23 14:11:55 crc kubenswrapper[4851]: I0223 14:11:55.122801 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/util/0.log" Feb 23 14:11:55 crc kubenswrapper[4851]: I0223 14:11:55.130682 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8f1e381ae74457d3e6fdccc5d11bd0c4ebcdc4b32049a25cdcf860eddarmdbh_76eb9096-ceb3-4f9e-8dea-2fce146af5c0/pull/0.log" Feb 23 14:11:55 crc kubenswrapper[4851]: I0223 14:11:55.611830 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-k8pws_115cc313-eea6-40cd-9e8a-a7205e83cc07/manager/0.log" Feb 23 14:11:55 crc kubenswrapper[4851]: I0223 14:11:55.946505 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-rm79x_f17a63ea-4b87-429b-8c90-58790c572b9e/manager/0.log" Feb 23 14:11:56 crc kubenswrapper[4851]: I0223 14:11:56.127999 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-tvt8g_6fb817cf-5b9d-4879-a997-cd3f1d99db3c/manager/0.log" Feb 23 14:11:56 crc kubenswrapper[4851]: I0223 14:11:56.443567 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-8wdqc_9abe19ef-7cfa-43dd-983c-bcef5a540100/manager/0.log" Feb 23 14:11:56 crc kubenswrapper[4851]: I0223 14:11:56.860784 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-bctck_ef730879-0a7d-4e4a-925e-8ef30c366d64/manager/0.log" Feb 23 14:11:56 crc kubenswrapper[4851]: I0223 14:11:56.936449 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-2xlr5_ca30fe6b-5b33-4e6e-acb5-93a49ae9257d/manager/0.log" Feb 23 14:11:57 crc kubenswrapper[4851]: I0223 14:11:57.050742 4851 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-b827v_834a522f-ca03-403d-8402-679845f7c6c3/manager/0.log" Feb 23 14:11:57 crc kubenswrapper[4851]: I0223 14:11:57.193857 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-pd5bf_71fd5f4f-a9fc-4242-813a-3fb7d5827c41/manager/0.log" Feb 23 14:11:57 crc kubenswrapper[4851]: I0223 14:11:57.268797 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-sd26k_0dbb5228-ae4a-427d-97a2-3768b460e134/manager/0.log" Feb 23 14:11:57 crc kubenswrapper[4851]: I0223 14:11:57.547314 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-vrjqg_40f2272b-7e63-4666-b858-9722a0af16c8/manager/0.log" Feb 23 14:11:57 crc kubenswrapper[4851]: I0223 14:11:57.704025 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-rbgkf_84a8d9f7-24b2-4f08-a917-b614dc537ffe/manager/0.log" Feb 23 14:11:58 crc kubenswrapper[4851]: I0223 14:11:58.019797 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-65dws_33df439b-30ca-4397-a992-be2de607477a/manager/0.log" Feb 23 14:11:58 crc kubenswrapper[4851]: I0223 14:11:58.058457 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-x2gtd_fbc0edce-88b5-4ddc-8495-01e33e7a7753/manager/0.log" Feb 23 14:11:58 crc kubenswrapper[4851]: I0223 14:11:58.242692 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cs9lrm_e289a048-8c1a-4349-8b3b-8f3628e23bdc/manager/0.log" Feb 23 14:11:58 crc kubenswrapper[4851]: I0223 
14:11:58.558370 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-567cd64b9b-qlxlf_5723937f-de2b-455f-9015-e13595ee88e3/operator/0.log" Feb 23 14:11:58 crc kubenswrapper[4851]: I0223 14:11:58.785283 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rfcf2_edc46ca6-ff8f-4f31-981d-633b7a3766b1/registry-server/0.log" Feb 23 14:11:59 crc kubenswrapper[4851]: I0223 14:11:59.151153 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-x9kh8_ddcb4697-a6af-4baa-bd78-ae1f3b47c6af/manager/0.log" Feb 23 14:11:59 crc kubenswrapper[4851]: I0223 14:11:59.220842 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-hktpx_f9f540e9-5c10-4e33-b283-328276817914/manager/0.log" Feb 23 14:11:59 crc kubenswrapper[4851]: I0223 14:11:59.407347 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-sv2f9_18ea2332-4904-4213-9ba2-c678a2125b37/operator/0.log" Feb 23 14:11:59 crc kubenswrapper[4851]: I0223 14:11:59.675203 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-bwbpw_946a66f3-be29-4e8b-a800-637ef24a5694/manager/0.log" Feb 23 14:11:59 crc kubenswrapper[4851]: I0223 14:11:59.813292 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-khm2l_2a0bac92-ab56-4f67-a3a8-09ea4de25ae5/manager/0.log" Feb 23 14:11:59 crc kubenswrapper[4851]: I0223 14:11:59.867694 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-zktk2_6769f01c-bcc7-4e3e-a791-0fa315f82b37/manager/0.log" Feb 23 14:12:00 crc kubenswrapper[4851]: I0223 
14:12:00.111560 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-gdn69_ea9b35cb-5758-42d8-8877-ceb1e19eb751/manager/0.log" Feb 23 14:12:00 crc kubenswrapper[4851]: I0223 14:12:00.633448 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68bc894585-xr5dt_7a7fd548-a78f-4096-b68a-2bc28b937e96/manager/0.log" Feb 23 14:12:02 crc kubenswrapper[4851]: I0223 14:12:02.770734 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-nhj9r_c1c9227e-ff98-4005-ba5c-e2cfa2f9bb44/manager/0.log" Feb 23 14:12:11 crc kubenswrapper[4851]: I0223 14:12:11.924509 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 14:12:11 crc kubenswrapper[4851]: I0223 14:12:11.925078 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 14:12:18 crc kubenswrapper[4851]: I0223 14:12:18.546132 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-klwfn_62353140-dab7-459f-b0d4-c796087cb3f9/control-plane-machine-set-operator/0.log" Feb 23 14:12:18 crc kubenswrapper[4851]: I0223 14:12:18.719300 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dw6fk_8f8399a9-b50e-4ccb-8ab8-3e245ab4f229/kube-rbac-proxy/0.log" Feb 23 
14:12:18 crc kubenswrapper[4851]: I0223 14:12:18.743674 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dw6fk_8f8399a9-b50e-4ccb-8ab8-3e245ab4f229/machine-api-operator/0.log" Feb 23 14:12:30 crc kubenswrapper[4851]: I0223 14:12:30.405795 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-hrfw4_4e7d2a84-a59b-4489-98c0-78b2b3dc607c/cert-manager-controller/0.log" Feb 23 14:12:30 crc kubenswrapper[4851]: I0223 14:12:30.593907 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-qwxvn_af206ef2-9ee3-4eeb-81c6-5a82bef57eb0/cert-manager-cainjector/0.log" Feb 23 14:12:30 crc kubenswrapper[4851]: I0223 14:12:30.628791 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-k2vvq_b8e00a19-b1f6-4672-84e7-cc8abd468123/cert-manager-webhook/0.log" Feb 23 14:12:41 crc kubenswrapper[4851]: I0223 14:12:41.648069 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-qm7j9_cce85f53-7343-48af-8c40-275e87fbc140/nmstate-console-plugin/0.log" Feb 23 14:12:41 crc kubenswrapper[4851]: I0223 14:12:41.842101 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jmkxx_c4210a5b-8df0-4ccc-9811-5a2a831c2fa1/kube-rbac-proxy/0.log" Feb 23 14:12:41 crc kubenswrapper[4851]: I0223 14:12:41.851251 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-gdkg9_375d9b3b-340d-4b74-b352-74ac68607ad8/nmstate-handler/0.log" Feb 23 14:12:41 crc kubenswrapper[4851]: I0223 14:12:41.918557 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-jmkxx_c4210a5b-8df0-4ccc-9811-5a2a831c2fa1/nmstate-metrics/0.log" Feb 23 14:12:41 crc kubenswrapper[4851]: I0223 14:12:41.925023 4851 
patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 14:12:41 crc kubenswrapper[4851]: I0223 14:12:41.925070 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 14:12:42 crc kubenswrapper[4851]: I0223 14:12:42.024766 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-4k8hs_238cc1b8-1f38-43fa-92ca-bf3561e793fd/nmstate-operator/0.log" Feb 23 14:12:42 crc kubenswrapper[4851]: I0223 14:12:42.118711 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-mf27m_730c196a-16c5-4564-a5e1-db3f9fdd31d7/nmstate-webhook/0.log" Feb 23 14:13:08 crc kubenswrapper[4851]: I0223 14:13:08.259000 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-88gf5_2911e001-3b48-4ffc-9681-100739828235/kube-rbac-proxy/0.log" Feb 23 14:13:08 crc kubenswrapper[4851]: I0223 14:13:08.382398 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-88gf5_2911e001-3b48-4ffc-9681-100739828235/controller/0.log" Feb 23 14:13:08 crc kubenswrapper[4851]: I0223 14:13:08.499513 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-frr-files/0.log" Feb 23 14:13:08 crc kubenswrapper[4851]: I0223 14:13:08.685104 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-reloader/0.log" Feb 23 14:13:08 crc kubenswrapper[4851]: I0223 14:13:08.694056 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-metrics/0.log" Feb 23 14:13:08 crc kubenswrapper[4851]: I0223 14:13:08.712938 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-frr-files/0.log" Feb 23 14:13:08 crc kubenswrapper[4851]: I0223 14:13:08.776130 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-reloader/0.log" Feb 23 14:13:09 crc kubenswrapper[4851]: I0223 14:13:09.484886 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-reloader/0.log" Feb 23 14:13:09 crc kubenswrapper[4851]: I0223 14:13:09.484985 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-metrics/0.log" Feb 23 14:13:09 crc kubenswrapper[4851]: I0223 14:13:09.513585 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-frr-files/0.log" Feb 23 14:13:09 crc kubenswrapper[4851]: I0223 14:13:09.537375 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-metrics/0.log" Feb 23 14:13:09 crc kubenswrapper[4851]: I0223 14:13:09.690880 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-frr-files/0.log" Feb 23 14:13:09 crc kubenswrapper[4851]: I0223 14:13:09.721074 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-metrics/0.log" Feb 23 14:13:09 crc kubenswrapper[4851]: I0223 14:13:09.734181 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/cp-reloader/0.log" Feb 23 14:13:09 crc kubenswrapper[4851]: I0223 14:13:09.840534 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/controller/0.log" Feb 23 14:13:09 crc kubenswrapper[4851]: I0223 14:13:09.943661 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/kube-rbac-proxy/0.log" Feb 23 14:13:09 crc kubenswrapper[4851]: I0223 14:13:09.986621 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/frr-metrics/0.log" Feb 23 14:13:10 crc kubenswrapper[4851]: I0223 14:13:10.105419 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/kube-rbac-proxy-frr/0.log" Feb 23 14:13:10 crc kubenswrapper[4851]: I0223 14:13:10.158989 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/reloader/0.log" Feb 23 14:13:10 crc kubenswrapper[4851]: I0223 14:13:10.369712 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-8qsz9_21b51896-5127-4eef-8f88-87b1e811103c/frr-k8s-webhook-server/0.log" Feb 23 14:13:10 crc kubenswrapper[4851]: I0223 14:13:10.493154 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58d4d555d4-9b64v_0926a535-2dd4-4e82-9bff-6f806330985a/manager/0.log" Feb 23 14:13:10 crc kubenswrapper[4851]: I0223 14:13:10.672433 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5d6f8cc6fd-wcv5v_e9ea5798-bfec-4380-b5db-eee20abfe719/webhook-server/0.log" Feb 23 14:13:10 crc kubenswrapper[4851]: I0223 14:13:10.836062 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fvlxq_44188c33-1cb1-4c27-8314-4431469de3bb/kube-rbac-proxy/0.log" Feb 23 14:13:11 crc kubenswrapper[4851]: I0223 14:13:11.401064 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hb2x8_033fbbfa-b771-4acb-a64c-7212064277b3/frr/0.log" Feb 23 14:13:11 crc kubenswrapper[4851]: I0223 14:13:11.425129 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fvlxq_44188c33-1cb1-4c27-8314-4431469de3bb/speaker/0.log" Feb 23 14:13:11 crc kubenswrapper[4851]: I0223 14:13:11.925159 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 14:13:11 crc kubenswrapper[4851]: I0223 14:13:11.925538 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 14:13:11 crc kubenswrapper[4851]: I0223 14:13:11.925593 4851 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-npswg" Feb 23 14:13:11 crc kubenswrapper[4851]: I0223 14:13:11.926387 4851 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b"} 
pod="openshift-machine-config-operator/machine-config-daemon-npswg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 14:13:11 crc kubenswrapper[4851]: I0223 14:13:11.926441 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" containerID="cri-o://90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" gracePeriod=600 Feb 23 14:13:12 crc kubenswrapper[4851]: E0223 14:13:12.067061 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:13:12 crc kubenswrapper[4851]: I0223 14:13:12.589119 4851 generic.go:334] "Generic (PLEG): container finished" podID="c5a296ee-a904-4283-8849-65abb16717b4" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" exitCode=0 Feb 23 14:13:12 crc kubenswrapper[4851]: I0223 14:13:12.589185 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerDied","Data":"90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b"} Feb 23 14:13:12 crc kubenswrapper[4851]: I0223 14:13:12.589251 4851 scope.go:117] "RemoveContainer" containerID="d2608ceb581f02745decb9aaeafec5770a4f1df96a11f0114b46173bf46dba1a" Feb 23 14:13:12 crc kubenswrapper[4851]: I0223 14:13:12.590113 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 
23 14:13:12 crc kubenswrapper[4851]: E0223 14:13:12.590539 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:13:24 crc kubenswrapper[4851]: I0223 14:13:24.376043 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/util/0.log" Feb 23 14:13:24 crc kubenswrapper[4851]: I0223 14:13:24.631760 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/pull/0.log" Feb 23 14:13:24 crc kubenswrapper[4851]: I0223 14:13:24.634250 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/pull/0.log" Feb 23 14:13:24 crc kubenswrapper[4851]: I0223 14:13:24.636709 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/util/0.log" Feb 23 14:13:24 crc kubenswrapper[4851]: I0223 14:13:24.786509 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/util/0.log" Feb 23 14:13:24 crc kubenswrapper[4851]: I0223 14:13:24.824583 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/pull/0.log" Feb 23 14:13:24 crc kubenswrapper[4851]: I0223 14:13:24.849237 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2139lh9j_4f747063-8a9c-4fa9-8af3-4b832b22dd24/extract/0.log" Feb 23 14:13:24 crc kubenswrapper[4851]: I0223 14:13:24.996878 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-utilities/0.log" Feb 23 14:13:25 crc kubenswrapper[4851]: I0223 14:13:25.206506 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-utilities/0.log" Feb 23 14:13:25 crc kubenswrapper[4851]: I0223 14:13:25.270917 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-content/0.log" Feb 23 14:13:25 crc kubenswrapper[4851]: I0223 14:13:25.294045 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-content/0.log" Feb 23 14:13:25 crc kubenswrapper[4851]: I0223 14:13:25.427094 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-content/0.log" Feb 23 14:13:25 crc kubenswrapper[4851]: I0223 14:13:25.431008 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/extract-utilities/0.log" Feb 23 14:13:25 crc kubenswrapper[4851]: I0223 14:13:25.714261 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-utilities/0.log" Feb 23 14:13:26 crc kubenswrapper[4851]: I0223 14:13:26.005947 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpp9t_5445da7a-b2cb-477c-99aa-e70e2f61dd70/registry-server/0.log" Feb 23 14:13:26 crc kubenswrapper[4851]: I0223 14:13:26.224569 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-content/0.log" Feb 23 14:13:26 crc kubenswrapper[4851]: I0223 14:13:26.230212 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-utilities/0.log" Feb 23 14:13:26 crc kubenswrapper[4851]: I0223 14:13:26.237863 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-content/0.log" Feb 23 14:13:26 crc kubenswrapper[4851]: I0223 14:13:26.451640 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-utilities/0.log" Feb 23 14:13:26 crc kubenswrapper[4851]: I0223 14:13:26.494098 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/extract-content/0.log" Feb 23 14:13:26 crc kubenswrapper[4851]: I0223 14:13:26.717220 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/util/0.log" Feb 23 14:13:26 crc kubenswrapper[4851]: I0223 14:13:26.956283 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/util/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: I0223 14:13:27.012948 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/pull/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: I0223 14:13:27.058276 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/pull/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: I0223 14:13:27.211792 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rr62g_899927dd-1984-4973-94f5-e53fac8948ab/registry-server/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: I0223 14:13:27.240631 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/util/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: I0223 14:13:27.292762 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/extract/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: I0223 14:13:27.309430 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecav8bqv_79afbe6e-8ae5-4f33-b520-6f24ba3f44b2/pull/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: I0223 14:13:27.551578 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jxzb2_0d8139b6-0c9b-48cf-b664-44304568f2d1/marketplace-operator/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: 
I0223 14:13:27.648179 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-utilities/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: I0223 14:13:27.856524 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-utilities/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: I0223 14:13:27.887648 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-content/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: I0223 14:13:27.918148 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-content/0.log" Feb 23 14:13:27 crc kubenswrapper[4851]: I0223 14:13:27.969437 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:13:27 crc kubenswrapper[4851]: E0223 14:13:27.969787 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:13:28 crc kubenswrapper[4851]: I0223 14:13:28.146282 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-content/0.log" Feb 23 14:13:28 crc kubenswrapper[4851]: I0223 14:13:28.173353 4851 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/extract-utilities/0.log" Feb 23 14:13:28 crc kubenswrapper[4851]: I0223 14:13:28.351794 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-f2wl8_73698ae1-5cf2-41c6-99f8-0e943404b97f/registry-server/0.log" Feb 23 14:13:28 crc kubenswrapper[4851]: I0223 14:13:28.375594 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-utilities/0.log" Feb 23 14:13:28 crc kubenswrapper[4851]: I0223 14:13:28.528162 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-utilities/0.log" Feb 23 14:13:28 crc kubenswrapper[4851]: I0223 14:13:28.537824 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-content/0.log" Feb 23 14:13:28 crc kubenswrapper[4851]: I0223 14:13:28.585804 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-content/0.log" Feb 23 14:13:28 crc kubenswrapper[4851]: I0223 14:13:28.780531 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-utilities/0.log" Feb 23 14:13:28 crc kubenswrapper[4851]: I0223 14:13:28.806455 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/extract-content/0.log" Feb 23 14:13:29 crc kubenswrapper[4851]: I0223 14:13:29.362540 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dwcng_81924658-5ad1-41ab-ac76-c807fc665048/registry-server/0.log" Feb 23 
14:13:42 crc kubenswrapper[4851]: I0223 14:13:42.969100 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:13:42 crc kubenswrapper[4851]: E0223 14:13:42.969890 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:13:56 crc kubenswrapper[4851]: I0223 14:13:56.969025 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:13:56 crc kubenswrapper[4851]: E0223 14:13:56.969899 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:14:08 crc kubenswrapper[4851]: I0223 14:14:08.968543 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:14:08 crc kubenswrapper[4851]: E0223 14:14:08.969465 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" 
podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:14:22 crc kubenswrapper[4851]: I0223 14:14:22.969618 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:14:22 crc kubenswrapper[4851]: E0223 14:14:22.970731 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:14:34 crc kubenswrapper[4851]: I0223 14:14:34.968962 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:14:34 crc kubenswrapper[4851]: E0223 14:14:34.969762 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:14:45 crc kubenswrapper[4851]: I0223 14:14:45.977512 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:14:45 crc kubenswrapper[4851]: E0223 14:14:45.978120 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:14:59 crc kubenswrapper[4851]: I0223 14:14:59.969431 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:14:59 crc kubenswrapper[4851]: E0223 14:14:59.970272 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.172222 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc"] Feb 23 14:15:00 crc kubenswrapper[4851]: E0223 14:15:00.173010 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4377290-4b37-45fa-b46d-6332f5b02b05" containerName="container-00" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.173066 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4377290-4b37-45fa-b46d-6332f5b02b05" containerName="container-00" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.173375 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4377290-4b37-45fa-b46d-6332f5b02b05" containerName="container-00" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.174004 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.178169 4851 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.178791 4851 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.180945 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc"] Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.366849 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08946131-43fe-4658-85d2-997e985a86a1-secret-volume\") pod \"collect-profiles-29530935-4mnpc\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.366986 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08946131-43fe-4658-85d2-997e985a86a1-config-volume\") pod \"collect-profiles-29530935-4mnpc\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.367090 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9f76\" (UniqueName: \"kubernetes.io/projected/08946131-43fe-4658-85d2-997e985a86a1-kube-api-access-w9f76\") pod \"collect-profiles-29530935-4mnpc\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.469277 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08946131-43fe-4658-85d2-997e985a86a1-secret-volume\") pod \"collect-profiles-29530935-4mnpc\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.469453 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08946131-43fe-4658-85d2-997e985a86a1-config-volume\") pod \"collect-profiles-29530935-4mnpc\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.469491 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9f76\" (UniqueName: \"kubernetes.io/projected/08946131-43fe-4658-85d2-997e985a86a1-kube-api-access-w9f76\") pod \"collect-profiles-29530935-4mnpc\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.470345 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08946131-43fe-4658-85d2-997e985a86a1-config-volume\") pod \"collect-profiles-29530935-4mnpc\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.657577 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/08946131-43fe-4658-85d2-997e985a86a1-secret-volume\") pod \"collect-profiles-29530935-4mnpc\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.658132 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9f76\" (UniqueName: \"kubernetes.io/projected/08946131-43fe-4658-85d2-997e985a86a1-kube-api-access-w9f76\") pod \"collect-profiles-29530935-4mnpc\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:00 crc kubenswrapper[4851]: I0223 14:15:00.797669 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:01 crc kubenswrapper[4851]: I0223 14:15:01.226672 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc"] Feb 23 14:15:01 crc kubenswrapper[4851]: I0223 14:15:01.560897 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" event={"ID":"08946131-43fe-4658-85d2-997e985a86a1","Type":"ContainerStarted","Data":"093e7bf99ab54e28c2ed099fc0aa2dc4091ff80282e331f4237cdfc3c3de7c2e"} Feb 23 14:15:01 crc kubenswrapper[4851]: I0223 14:15:01.561272 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" event={"ID":"08946131-43fe-4658-85d2-997e985a86a1","Type":"ContainerStarted","Data":"70b48e7444d16a519fc64dfb17f0363505dfdc33712afe4381f66d365031140f"} Feb 23 14:15:01 crc kubenswrapper[4851]: I0223 14:15:01.586131 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" 
podStartSLOduration=1.58611023 podStartE2EDuration="1.58611023s" podCreationTimestamp="2026-02-23 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 14:15:01.575895181 +0000 UTC m=+4056.257598859" watchObservedRunningTime="2026-02-23 14:15:01.58611023 +0000 UTC m=+4056.267813898" Feb 23 14:15:02 crc kubenswrapper[4851]: I0223 14:15:02.572635 4851 generic.go:334] "Generic (PLEG): container finished" podID="08946131-43fe-4658-85d2-997e985a86a1" containerID="093e7bf99ab54e28c2ed099fc0aa2dc4091ff80282e331f4237cdfc3c3de7c2e" exitCode=0 Feb 23 14:15:02 crc kubenswrapper[4851]: I0223 14:15:02.572722 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" event={"ID":"08946131-43fe-4658-85d2-997e985a86a1","Type":"ContainerDied","Data":"093e7bf99ab54e28c2ed099fc0aa2dc4091ff80282e331f4237cdfc3c3de7c2e"} Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.062004 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.236094 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08946131-43fe-4658-85d2-997e985a86a1-config-volume\") pod \"08946131-43fe-4658-85d2-997e985a86a1\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.236481 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08946131-43fe-4658-85d2-997e985a86a1-secret-volume\") pod \"08946131-43fe-4658-85d2-997e985a86a1\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.236672 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9f76\" (UniqueName: \"kubernetes.io/projected/08946131-43fe-4658-85d2-997e985a86a1-kube-api-access-w9f76\") pod \"08946131-43fe-4658-85d2-997e985a86a1\" (UID: \"08946131-43fe-4658-85d2-997e985a86a1\") " Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.236936 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08946131-43fe-4658-85d2-997e985a86a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "08946131-43fe-4658-85d2-997e985a86a1" (UID: "08946131-43fe-4658-85d2-997e985a86a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.243080 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08946131-43fe-4658-85d2-997e985a86a1-kube-api-access-w9f76" (OuterVolumeSpecName: "kube-api-access-w9f76") pod "08946131-43fe-4658-85d2-997e985a86a1" (UID: "08946131-43fe-4658-85d2-997e985a86a1"). 
InnerVolumeSpecName "kube-api-access-w9f76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.250504 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08946131-43fe-4658-85d2-997e985a86a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "08946131-43fe-4658-85d2-997e985a86a1" (UID: "08946131-43fe-4658-85d2-997e985a86a1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.297390 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk"] Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.307887 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29530890-thbnk"] Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.340437 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9f76\" (UniqueName: \"kubernetes.io/projected/08946131-43fe-4658-85d2-997e985a86a1-kube-api-access-w9f76\") on node \"crc\" DevicePath \"\"" Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.340656 4851 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08946131-43fe-4658-85d2-997e985a86a1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.340743 4851 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/08946131-43fe-4658-85d2-997e985a86a1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.589506 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" 
event={"ID":"08946131-43fe-4658-85d2-997e985a86a1","Type":"ContainerDied","Data":"70b48e7444d16a519fc64dfb17f0363505dfdc33712afe4381f66d365031140f"} Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.589545 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70b48e7444d16a519fc64dfb17f0363505dfdc33712afe4381f66d365031140f" Feb 23 14:15:04 crc kubenswrapper[4851]: I0223 14:15:04.589549 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29530935-4mnpc" Feb 23 14:15:05 crc kubenswrapper[4851]: I0223 14:15:05.987697 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="274dfffc-b683-497e-b4f5-42454c1bda65" path="/var/lib/kubelet/pods/274dfffc-b683-497e-b4f5-42454c1bda65/volumes" Feb 23 14:15:10 crc kubenswrapper[4851]: I0223 14:15:10.968646 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:15:10 crc kubenswrapper[4851]: E0223 14:15:10.969533 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:15:20 crc kubenswrapper[4851]: I0223 14:15:20.717359 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5tk5n/must-gather-gwkqn" event={"ID":"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59","Type":"ContainerDied","Data":"39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1"} Feb 23 14:15:20 crc kubenswrapper[4851]: I0223 14:15:20.717385 4851 generic.go:334] "Generic (PLEG): container finished" podID="e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" 
containerID="39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1" exitCode=0 Feb 23 14:15:20 crc kubenswrapper[4851]: I0223 14:15:20.718534 4851 scope.go:117] "RemoveContainer" containerID="39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1" Feb 23 14:15:21 crc kubenswrapper[4851]: I0223 14:15:21.530771 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5tk5n_must-gather-gwkqn_e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59/gather/0.log" Feb 23 14:15:25 crc kubenswrapper[4851]: I0223 14:15:25.980373 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:15:25 crc kubenswrapper[4851]: E0223 14:15:25.981185 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.689038 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ltwx6"] Feb 23 14:15:26 crc kubenswrapper[4851]: E0223 14:15:26.689761 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08946131-43fe-4658-85d2-997e985a86a1" containerName="collect-profiles" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.689778 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="08946131-43fe-4658-85d2-997e985a86a1" containerName="collect-profiles" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.689962 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="08946131-43fe-4658-85d2-997e985a86a1" containerName="collect-profiles" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 
14:15:26.691189 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.731920 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ltwx6"] Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.769753 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-catalog-content\") pod \"community-operators-ltwx6\" (UID: \"3922734a-a713-42ac-a17d-4f89dd5cea38\") " pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.769845 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-utilities\") pod \"community-operators-ltwx6\" (UID: \"3922734a-a713-42ac-a17d-4f89dd5cea38\") " pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.769880 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mzpm\" (UniqueName: \"kubernetes.io/projected/3922734a-a713-42ac-a17d-4f89dd5cea38-kube-api-access-6mzpm\") pod \"community-operators-ltwx6\" (UID: \"3922734a-a713-42ac-a17d-4f89dd5cea38\") " pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.872999 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-utilities\") pod \"community-operators-ltwx6\" (UID: \"3922734a-a713-42ac-a17d-4f89dd5cea38\") " pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 
14:15:26.873055 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mzpm\" (UniqueName: \"kubernetes.io/projected/3922734a-a713-42ac-a17d-4f89dd5cea38-kube-api-access-6mzpm\") pod \"community-operators-ltwx6\" (UID: \"3922734a-a713-42ac-a17d-4f89dd5cea38\") " pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.873181 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-catalog-content\") pod \"community-operators-ltwx6\" (UID: \"3922734a-a713-42ac-a17d-4f89dd5cea38\") " pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.873665 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-utilities\") pod \"community-operators-ltwx6\" (UID: \"3922734a-a713-42ac-a17d-4f89dd5cea38\") " pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.873665 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-catalog-content\") pod \"community-operators-ltwx6\" (UID: \"3922734a-a713-42ac-a17d-4f89dd5cea38\") " pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:26 crc kubenswrapper[4851]: I0223 14:15:26.896000 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mzpm\" (UniqueName: \"kubernetes.io/projected/3922734a-a713-42ac-a17d-4f89dd5cea38-kube-api-access-6mzpm\") pod \"community-operators-ltwx6\" (UID: \"3922734a-a713-42ac-a17d-4f89dd5cea38\") " pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:27 crc kubenswrapper[4851]: I0223 14:15:27.022496 4851 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:27 crc kubenswrapper[4851]: I0223 14:15:27.511186 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ltwx6"] Feb 23 14:15:27 crc kubenswrapper[4851]: I0223 14:15:27.802582 4851 generic.go:334] "Generic (PLEG): container finished" podID="3922734a-a713-42ac-a17d-4f89dd5cea38" containerID="e1883396f2da508cf942889a78bc8e4ecbbf856a2e11e8b0663db0db000e9822" exitCode=0 Feb 23 14:15:27 crc kubenswrapper[4851]: I0223 14:15:27.802754 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltwx6" event={"ID":"3922734a-a713-42ac-a17d-4f89dd5cea38","Type":"ContainerDied","Data":"e1883396f2da508cf942889a78bc8e4ecbbf856a2e11e8b0663db0db000e9822"} Feb 23 14:15:27 crc kubenswrapper[4851]: I0223 14:15:27.802855 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltwx6" event={"ID":"3922734a-a713-42ac-a17d-4f89dd5cea38","Type":"ContainerStarted","Data":"83c679d64287a2070aac10f7ad8bf9e7069277137cc055bc4b3ac852f671643e"} Feb 23 14:15:27 crc kubenswrapper[4851]: I0223 14:15:27.804630 4851 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 14:15:29 crc kubenswrapper[4851]: I0223 14:15:29.820982 4851 generic.go:334] "Generic (PLEG): container finished" podID="3922734a-a713-42ac-a17d-4f89dd5cea38" containerID="5ad7f07eac52a465103ac0c1c6a338e6893602de4f7162e1a402bd65a7c63695" exitCode=0 Feb 23 14:15:29 crc kubenswrapper[4851]: I0223 14:15:29.821084 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltwx6" event={"ID":"3922734a-a713-42ac-a17d-4f89dd5cea38","Type":"ContainerDied","Data":"5ad7f07eac52a465103ac0c1c6a338e6893602de4f7162e1a402bd65a7c63695"} Feb 23 14:15:31 crc kubenswrapper[4851]: I0223 
14:15:31.839083 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltwx6" event={"ID":"3922734a-a713-42ac-a17d-4f89dd5cea38","Type":"ContainerStarted","Data":"3851c82ba50ef068b1434a64545cff35864e46e6e7c5ac5f17b77018c63eafb3"} Feb 23 14:15:31 crc kubenswrapper[4851]: I0223 14:15:31.867982 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ltwx6" podStartSLOduration=3.272318724 podStartE2EDuration="5.867963423s" podCreationTimestamp="2026-02-23 14:15:26 +0000 UTC" firstStartedPulling="2026-02-23 14:15:27.804445679 +0000 UTC m=+4082.486149357" lastFinishedPulling="2026-02-23 14:15:30.400090388 +0000 UTC m=+4085.081794056" observedRunningTime="2026-02-23 14:15:31.857391514 +0000 UTC m=+4086.539095212" watchObservedRunningTime="2026-02-23 14:15:31.867963423 +0000 UTC m=+4086.549667101" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.062902 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5tk5n/must-gather-gwkqn"] Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.063139 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5tk5n/must-gather-gwkqn" podUID="e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" containerName="copy" containerID="cri-o://f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675" gracePeriod=2 Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.072317 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5tk5n/must-gather-gwkqn"] Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.564017 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5tk5n_must-gather-gwkqn_e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59/copy/0.log" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.564841 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5tk5n/must-gather-gwkqn" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.694174 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-must-gather-output\") pod \"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59\" (UID: \"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59\") " Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.694276 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvscn\" (UniqueName: \"kubernetes.io/projected/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-kube-api-access-fvscn\") pod \"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59\" (UID: \"e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59\") " Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.699791 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-kube-api-access-fvscn" (OuterVolumeSpecName: "kube-api-access-fvscn") pod "e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" (UID: "e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59"). InnerVolumeSpecName "kube-api-access-fvscn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.796440 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvscn\" (UniqueName: \"kubernetes.io/projected/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-kube-api-access-fvscn\") on node \"crc\" DevicePath \"\"" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.834282 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" (UID: "e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.856167 4851 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5tk5n_must-gather-gwkqn_e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59/copy/0.log" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.856708 4851 generic.go:334] "Generic (PLEG): container finished" podID="e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" containerID="f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675" exitCode=143 Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.856764 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5tk5n/must-gather-gwkqn" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.856775 4851 scope.go:117] "RemoveContainer" containerID="f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.878902 4851 scope.go:117] "RemoveContainer" containerID="39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.897852 4851 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.948593 4851 scope.go:117] "RemoveContainer" containerID="f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675" Feb 23 14:15:33 crc kubenswrapper[4851]: E0223 14:15:33.949179 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675\": container with ID starting with f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675 not found: ID does not exist" 
containerID="f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.949244 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675"} err="failed to get container status \"f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675\": rpc error: code = NotFound desc = could not find container \"f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675\": container with ID starting with f2cb19dab10f7a1e19013f494d0ec727fb24b4bc4b9ebd77e24273d4f7cdc675 not found: ID does not exist" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.949279 4851 scope.go:117] "RemoveContainer" containerID="39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1" Feb 23 14:15:33 crc kubenswrapper[4851]: E0223 14:15:33.949765 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1\": container with ID starting with 39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1 not found: ID does not exist" containerID="39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.949787 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1"} err="failed to get container status \"39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1\": rpc error: code = NotFound desc = could not find container \"39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1\": container with ID starting with 39bd68b82414eb5b560b8530bd51b7449d4c018a8a588b18eb09b68d6c3bddf1 not found: ID does not exist" Feb 23 14:15:33 crc kubenswrapper[4851]: I0223 14:15:33.982264 4851 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" path="/var/lib/kubelet/pods/e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59/volumes" Feb 23 14:15:37 crc kubenswrapper[4851]: I0223 14:15:37.022617 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:37 crc kubenswrapper[4851]: I0223 14:15:37.022969 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:37 crc kubenswrapper[4851]: I0223 14:15:37.077704 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:37 crc kubenswrapper[4851]: I0223 14:15:37.936233 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:37 crc kubenswrapper[4851]: I0223 14:15:37.990663 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ltwx6"] Feb 23 14:15:38 crc kubenswrapper[4851]: I0223 14:15:38.969421 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:15:38 crc kubenswrapper[4851]: E0223 14:15:38.969894 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:15:39 crc kubenswrapper[4851]: I0223 14:15:39.906654 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ltwx6" 
podUID="3922734a-a713-42ac-a17d-4f89dd5cea38" containerName="registry-server" containerID="cri-o://3851c82ba50ef068b1434a64545cff35864e46e6e7c5ac5f17b77018c63eafb3" gracePeriod=2 Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.019938 4851 scope.go:117] "RemoveContainer" containerID="6a8f17c1664770601b327f2d59e24ccb722fb499d3868b020fdb016019fb277e" Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.931254 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j8gdj"] Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.931662 4851 generic.go:334] "Generic (PLEG): container finished" podID="3922734a-a713-42ac-a17d-4f89dd5cea38" containerID="3851c82ba50ef068b1434a64545cff35864e46e6e7c5ac5f17b77018c63eafb3" exitCode=0 Feb 23 14:15:40 crc kubenswrapper[4851]: E0223 14:15:40.932021 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" containerName="gather" Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.932042 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" containerName="gather" Feb 23 14:15:40 crc kubenswrapper[4851]: E0223 14:15:40.932091 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" containerName="copy" Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.932101 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" containerName="copy" Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.932373 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" containerName="gather" Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.932404 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="e105d69a-40d9-45ac-9cc4-6f7f3ceb8c59" containerName="copy" Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.935441 4851 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltwx6" event={"ID":"3922734a-a713-42ac-a17d-4f89dd5cea38","Type":"ContainerDied","Data":"3851c82ba50ef068b1434a64545cff35864e46e6e7c5ac5f17b77018c63eafb3"} Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.935494 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ltwx6" event={"ID":"3922734a-a713-42ac-a17d-4f89dd5cea38","Type":"ContainerDied","Data":"83c679d64287a2070aac10f7ad8bf9e7069277137cc055bc4b3ac852f671643e"} Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.935543 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83c679d64287a2070aac10f7ad8bf9e7069277137cc055bc4b3ac852f671643e" Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.935578 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:40 crc kubenswrapper[4851]: I0223 14:15:40.949031 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8gdj"] Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.028802 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.055842 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-catalog-content\") pod \"certified-operators-j8gdj\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.056520 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-utilities\") pod \"certified-operators-j8gdj\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.056727 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s54nr\" (UniqueName: \"kubernetes.io/projected/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-kube-api-access-s54nr\") pod \"certified-operators-j8gdj\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.168155 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-utilities\") pod \"3922734a-a713-42ac-a17d-4f89dd5cea38\" (UID: \"3922734a-a713-42ac-a17d-4f89dd5cea38\") " Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.168217 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-catalog-content\") pod \"3922734a-a713-42ac-a17d-4f89dd5cea38\" (UID: 
\"3922734a-a713-42ac-a17d-4f89dd5cea38\") " Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.168286 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mzpm\" (UniqueName: \"kubernetes.io/projected/3922734a-a713-42ac-a17d-4f89dd5cea38-kube-api-access-6mzpm\") pod \"3922734a-a713-42ac-a17d-4f89dd5cea38\" (UID: \"3922734a-a713-42ac-a17d-4f89dd5cea38\") " Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.168687 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-catalog-content\") pod \"certified-operators-j8gdj\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.168729 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-utilities\") pod \"certified-operators-j8gdj\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.168776 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s54nr\" (UniqueName: \"kubernetes.io/projected/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-kube-api-access-s54nr\") pod \"certified-operators-j8gdj\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.170118 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-utilities" (OuterVolumeSpecName: "utilities") pod "3922734a-a713-42ac-a17d-4f89dd5cea38" (UID: "3922734a-a713-42ac-a17d-4f89dd5cea38"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.170511 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-catalog-content\") pod \"certified-operators-j8gdj\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.170919 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-utilities\") pod \"certified-operators-j8gdj\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.177755 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3922734a-a713-42ac-a17d-4f89dd5cea38-kube-api-access-6mzpm" (OuterVolumeSpecName: "kube-api-access-6mzpm") pod "3922734a-a713-42ac-a17d-4f89dd5cea38" (UID: "3922734a-a713-42ac-a17d-4f89dd5cea38"). InnerVolumeSpecName "kube-api-access-6mzpm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.212545 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s54nr\" (UniqueName: \"kubernetes.io/projected/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-kube-api-access-s54nr\") pod \"certified-operators-j8gdj\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.260482 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3922734a-a713-42ac-a17d-4f89dd5cea38" (UID: "3922734a-a713-42ac-a17d-4f89dd5cea38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.276834 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mzpm\" (UniqueName: \"kubernetes.io/projected/3922734a-a713-42ac-a17d-4f89dd5cea38-kube-api-access-6mzpm\") on node \"crc\" DevicePath \"\"" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.276871 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.276883 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3922734a-a713-42ac-a17d-4f89dd5cea38-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.328564 4851 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.824072 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8gdj"] Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.943365 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8gdj" event={"ID":"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e","Type":"ContainerStarted","Data":"238c6ca863f5839f9961639a5dfcb3cf3a6ae1b795c0bb85611cf148e8e9569b"} Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.943411 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ltwx6" Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.980260 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ltwx6"] Feb 23 14:15:41 crc kubenswrapper[4851]: I0223 14:15:41.988452 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ltwx6"] Feb 23 14:15:42 crc kubenswrapper[4851]: I0223 14:15:42.952209 4851 generic.go:334] "Generic (PLEG): container finished" podID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" containerID="341fbd6bdedb90d2d2649ffb63e66786cd9749175daea0ee10f518b9b3086dbf" exitCode=0 Feb 23 14:15:42 crc kubenswrapper[4851]: I0223 14:15:42.952256 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8gdj" event={"ID":"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e","Type":"ContainerDied","Data":"341fbd6bdedb90d2d2649ffb63e66786cd9749175daea0ee10f518b9b3086dbf"} Feb 23 14:15:43 crc kubenswrapper[4851]: I0223 14:15:43.963359 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8gdj" 
event={"ID":"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e","Type":"ContainerStarted","Data":"6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556"} Feb 23 14:15:43 crc kubenswrapper[4851]: I0223 14:15:43.982684 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3922734a-a713-42ac-a17d-4f89dd5cea38" path="/var/lib/kubelet/pods/3922734a-a713-42ac-a17d-4f89dd5cea38/volumes" Feb 23 14:15:44 crc kubenswrapper[4851]: I0223 14:15:44.975625 4851 generic.go:334] "Generic (PLEG): container finished" podID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" containerID="6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556" exitCode=0 Feb 23 14:15:44 crc kubenswrapper[4851]: I0223 14:15:44.975672 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8gdj" event={"ID":"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e","Type":"ContainerDied","Data":"6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556"} Feb 23 14:15:45 crc kubenswrapper[4851]: I0223 14:15:45.988761 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8gdj" event={"ID":"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e","Type":"ContainerStarted","Data":"352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291"} Feb 23 14:15:46 crc kubenswrapper[4851]: I0223 14:15:46.011659 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j8gdj" podStartSLOduration=3.60723358 podStartE2EDuration="6.011635138s" podCreationTimestamp="2026-02-23 14:15:40 +0000 UTC" firstStartedPulling="2026-02-23 14:15:42.954569366 +0000 UTC m=+4097.636273044" lastFinishedPulling="2026-02-23 14:15:45.358970924 +0000 UTC m=+4100.040674602" observedRunningTime="2026-02-23 14:15:46.002731226 +0000 UTC m=+4100.684434924" watchObservedRunningTime="2026-02-23 14:15:46.011635138 +0000 UTC m=+4100.693338816" Feb 23 14:15:50 crc kubenswrapper[4851]: I0223 14:15:50.969118 
4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:15:50 crc kubenswrapper[4851]: E0223 14:15:50.969905 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:15:51 crc kubenswrapper[4851]: I0223 14:15:51.329414 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:51 crc kubenswrapper[4851]: I0223 14:15:51.330571 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:51 crc kubenswrapper[4851]: I0223 14:15:51.382093 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:52 crc kubenswrapper[4851]: I0223 14:15:52.077527 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:52 crc kubenswrapper[4851]: I0223 14:15:52.123885 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8gdj"] Feb 23 14:15:54 crc kubenswrapper[4851]: I0223 14:15:54.045782 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j8gdj" podUID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" containerName="registry-server" containerID="cri-o://352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291" gracePeriod=2 Feb 23 14:15:54 crc kubenswrapper[4851]: I0223 14:15:54.519853 4851 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:54 crc kubenswrapper[4851]: I0223 14:15:54.626071 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s54nr\" (UniqueName: \"kubernetes.io/projected/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-kube-api-access-s54nr\") pod \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " Feb 23 14:15:54 crc kubenswrapper[4851]: I0223 14:15:54.627122 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-catalog-content\") pod \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " Feb 23 14:15:54 crc kubenswrapper[4851]: I0223 14:15:54.627272 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-utilities\") pod \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\" (UID: \"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e\") " Feb 23 14:15:54 crc kubenswrapper[4851]: I0223 14:15:54.628014 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-utilities" (OuterVolumeSpecName: "utilities") pod "d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" (UID: "d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:15:54 crc kubenswrapper[4851]: I0223 14:15:54.632134 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-kube-api-access-s54nr" (OuterVolumeSpecName: "kube-api-access-s54nr") pod "d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" (UID: "d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e"). 
InnerVolumeSpecName "kube-api-access-s54nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:15:54 crc kubenswrapper[4851]: I0223 14:15:54.729806 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s54nr\" (UniqueName: \"kubernetes.io/projected/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-kube-api-access-s54nr\") on node \"crc\" DevicePath \"\"" Feb 23 14:15:54 crc kubenswrapper[4851]: I0223 14:15:54.729846 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 14:15:54 crc kubenswrapper[4851]: I0223 14:15:54.784795 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" (UID: "d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:15:54 crc kubenswrapper[4851]: I0223 14:15:54.831844 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.056627 4851 generic.go:334] "Generic (PLEG): container finished" podID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" containerID="352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291" exitCode=0 Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.056673 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8gdj" event={"ID":"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e","Type":"ContainerDied","Data":"352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291"} Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.056687 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j8gdj" Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.056698 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8gdj" event={"ID":"d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e","Type":"ContainerDied","Data":"238c6ca863f5839f9961639a5dfcb3cf3a6ae1b795c0bb85611cf148e8e9569b"} Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.056712 4851 scope.go:117] "RemoveContainer" containerID="352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291" Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.084270 4851 scope.go:117] "RemoveContainer" containerID="6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556" Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.094901 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8gdj"] Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.104152 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j8gdj"] Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.118320 4851 scope.go:117] "RemoveContainer" containerID="341fbd6bdedb90d2d2649ffb63e66786cd9749175daea0ee10f518b9b3086dbf" Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.155625 4851 scope.go:117] "RemoveContainer" containerID="352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291" Feb 23 14:15:55 crc kubenswrapper[4851]: E0223 14:15:55.156187 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291\": container with ID starting with 352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291 not found: ID does not exist" containerID="352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291" Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.156225 4851 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291"} err="failed to get container status \"352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291\": rpc error: code = NotFound desc = could not find container \"352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291\": container with ID starting with 352dea33faf1f5a940c489a5c9de8eca47f585ec1a2dc2440699ab6a72697291 not found: ID does not exist" Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.156253 4851 scope.go:117] "RemoveContainer" containerID="6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556" Feb 23 14:15:55 crc kubenswrapper[4851]: E0223 14:15:55.159434 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556\": container with ID starting with 6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556 not found: ID does not exist" containerID="6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556" Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.159473 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556"} err="failed to get container status \"6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556\": rpc error: code = NotFound desc = could not find container \"6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556\": container with ID starting with 6951cf514a15835881cabb369acc13659538551c1f0023725d6cd44a7ed1f556 not found: ID does not exist" Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.159495 4851 scope.go:117] "RemoveContainer" containerID="341fbd6bdedb90d2d2649ffb63e66786cd9749175daea0ee10f518b9b3086dbf" Feb 23 14:15:55 crc kubenswrapper[4851]: E0223 
14:15:55.159982 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"341fbd6bdedb90d2d2649ffb63e66786cd9749175daea0ee10f518b9b3086dbf\": container with ID starting with 341fbd6bdedb90d2d2649ffb63e66786cd9749175daea0ee10f518b9b3086dbf not found: ID does not exist" containerID="341fbd6bdedb90d2d2649ffb63e66786cd9749175daea0ee10f518b9b3086dbf" Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.160002 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341fbd6bdedb90d2d2649ffb63e66786cd9749175daea0ee10f518b9b3086dbf"} err="failed to get container status \"341fbd6bdedb90d2d2649ffb63e66786cd9749175daea0ee10f518b9b3086dbf\": rpc error: code = NotFound desc = could not find container \"341fbd6bdedb90d2d2649ffb63e66786cd9749175daea0ee10f518b9b3086dbf\": container with ID starting with 341fbd6bdedb90d2d2649ffb63e66786cd9749175daea0ee10f518b9b3086dbf not found: ID does not exist" Feb 23 14:15:55 crc kubenswrapper[4851]: I0223 14:15:55.982209 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" path="/var/lib/kubelet/pods/d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e/volumes" Feb 23 14:16:03 crc kubenswrapper[4851]: I0223 14:16:03.973055 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:16:03 crc kubenswrapper[4851]: E0223 14:16:03.973821 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:16:18 crc kubenswrapper[4851]: I0223 14:16:18.969278 
4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:16:18 crc kubenswrapper[4851]: E0223 14:16:18.970415 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:16:30 crc kubenswrapper[4851]: I0223 14:16:30.969298 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:16:30 crc kubenswrapper[4851]: E0223 14:16:30.970275 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:16:44 crc kubenswrapper[4851]: I0223 14:16:44.968416 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:16:44 crc kubenswrapper[4851]: E0223 14:16:44.969246 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:16:58 crc kubenswrapper[4851]: I0223 
14:16:58.969165 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:16:58 crc kubenswrapper[4851]: E0223 14:16:58.970127 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:17:09 crc kubenswrapper[4851]: I0223 14:17:09.969547 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:17:09 crc kubenswrapper[4851]: E0223 14:17:09.970276 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:17:24 crc kubenswrapper[4851]: I0223 14:17:24.968774 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:17:24 crc kubenswrapper[4851]: E0223 14:17:24.969718 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:17:35 crc 
kubenswrapper[4851]: I0223 14:17:35.974993 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:17:35 crc kubenswrapper[4851]: E0223 14:17:35.975821 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:17:47 crc kubenswrapper[4851]: I0223 14:17:47.970036 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:17:47 crc kubenswrapper[4851]: E0223 14:17:47.970951 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.039614 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ng22v"] Feb 23 14:17:58 crc kubenswrapper[4851]: E0223 14:17:58.040689 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" containerName="extract-content" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.040707 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" containerName="extract-content" Feb 23 14:17:58 crc kubenswrapper[4851]: E0223 14:17:58.040722 4851 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3922734a-a713-42ac-a17d-4f89dd5cea38" containerName="extract-utilities" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.040733 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="3922734a-a713-42ac-a17d-4f89dd5cea38" containerName="extract-utilities" Feb 23 14:17:58 crc kubenswrapper[4851]: E0223 14:17:58.040749 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3922734a-a713-42ac-a17d-4f89dd5cea38" containerName="extract-content" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.040757 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="3922734a-a713-42ac-a17d-4f89dd5cea38" containerName="extract-content" Feb 23 14:17:58 crc kubenswrapper[4851]: E0223 14:17:58.040782 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3922734a-a713-42ac-a17d-4f89dd5cea38" containerName="registry-server" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.040790 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="3922734a-a713-42ac-a17d-4f89dd5cea38" containerName="registry-server" Feb 23 14:17:58 crc kubenswrapper[4851]: E0223 14:17:58.040809 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" containerName="registry-server" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.040817 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" containerName="registry-server" Feb 23 14:17:58 crc kubenswrapper[4851]: E0223 14:17:58.040833 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" containerName="extract-utilities" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.040841 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" containerName="extract-utilities" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.041083 4851 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="d5e282f5-8d7a-4ad8-ad0f-53c6d8cfd92e" containerName="registry-server" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.041123 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="3922734a-a713-42ac-a17d-4f89dd5cea38" containerName="registry-server" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.043124 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ng22v"] Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.043252 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.136209 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-utilities\") pod \"redhat-operators-ng22v\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.136350 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-catalog-content\") pod \"redhat-operators-ng22v\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.136421 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntbm\" (UniqueName: \"kubernetes.io/projected/a8da8101-fc46-4a82-a83a-6701fc281e47-kube-api-access-bntbm\") pod \"redhat-operators-ng22v\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.238061 4851 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-utilities\") pod \"redhat-operators-ng22v\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.238191 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-catalog-content\") pod \"redhat-operators-ng22v\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.238242 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bntbm\" (UniqueName: \"kubernetes.io/projected/a8da8101-fc46-4a82-a83a-6701fc281e47-kube-api-access-bntbm\") pod \"redhat-operators-ng22v\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.238768 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-utilities\") pod \"redhat-operators-ng22v\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.238808 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-catalog-content\") pod \"redhat-operators-ng22v\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.258209 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bntbm\" 
(UniqueName: \"kubernetes.io/projected/a8da8101-fc46-4a82-a83a-6701fc281e47-kube-api-access-bntbm\") pod \"redhat-operators-ng22v\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.371953 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:17:58 crc kubenswrapper[4851]: I0223 14:17:58.822421 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ng22v"] Feb 23 14:17:59 crc kubenswrapper[4851]: I0223 14:17:59.333253 4851 generic.go:334] "Generic (PLEG): container finished" podID="a8da8101-fc46-4a82-a83a-6701fc281e47" containerID="0433f30ac32ca368d4e7777af5157d5ebab0a25994d9edef4401cc82a60ccb6c" exitCode=0 Feb 23 14:17:59 crc kubenswrapper[4851]: I0223 14:17:59.333310 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng22v" event={"ID":"a8da8101-fc46-4a82-a83a-6701fc281e47","Type":"ContainerDied","Data":"0433f30ac32ca368d4e7777af5157d5ebab0a25994d9edef4401cc82a60ccb6c"} Feb 23 14:17:59 crc kubenswrapper[4851]: I0223 14:17:59.333515 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng22v" event={"ID":"a8da8101-fc46-4a82-a83a-6701fc281e47","Type":"ContainerStarted","Data":"182a5eb0b55bc4a3b055b615e7f12e3e104e1f282e8e4452d680b21ecdd2b227"} Feb 23 14:18:00 crc kubenswrapper[4851]: I0223 14:18:00.343785 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng22v" event={"ID":"a8da8101-fc46-4a82-a83a-6701fc281e47","Type":"ContainerStarted","Data":"513dd1108a5cfd2184b8ae54bfe207ccf5e6836a3801bb75b25218c077b14294"} Feb 23 14:18:00 crc kubenswrapper[4851]: I0223 14:18:00.968812 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 
14:18:00 crc kubenswrapper[4851]: E0223 14:18:00.969134 4851 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-npswg_openshift-machine-config-operator(c5a296ee-a904-4283-8849-65abb16717b4)\"" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" Feb 23 14:18:01 crc kubenswrapper[4851]: I0223 14:18:01.355669 4851 generic.go:334] "Generic (PLEG): container finished" podID="a8da8101-fc46-4a82-a83a-6701fc281e47" containerID="513dd1108a5cfd2184b8ae54bfe207ccf5e6836a3801bb75b25218c077b14294" exitCode=0 Feb 23 14:18:01 crc kubenswrapper[4851]: I0223 14:18:01.355730 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng22v" event={"ID":"a8da8101-fc46-4a82-a83a-6701fc281e47","Type":"ContainerDied","Data":"513dd1108a5cfd2184b8ae54bfe207ccf5e6836a3801bb75b25218c077b14294"} Feb 23 14:18:02 crc kubenswrapper[4851]: I0223 14:18:02.380230 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng22v" event={"ID":"a8da8101-fc46-4a82-a83a-6701fc281e47","Type":"ContainerStarted","Data":"ea12ee0c4f8429720b44b043741b1353975cef8ca99eb55b8d2f1975c3666e9c"} Feb 23 14:18:02 crc kubenswrapper[4851]: I0223 14:18:02.406099 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ng22v" podStartSLOduration=2.975451812 podStartE2EDuration="5.406081284s" podCreationTimestamp="2026-02-23 14:17:57 +0000 UTC" firstStartedPulling="2026-02-23 14:17:59.335437147 +0000 UTC m=+4234.017140825" lastFinishedPulling="2026-02-23 14:18:01.766066619 +0000 UTC m=+4236.447770297" observedRunningTime="2026-02-23 14:18:02.402637737 +0000 UTC m=+4237.084341425" watchObservedRunningTime="2026-02-23 14:18:02.406081284 +0000 UTC 
m=+4237.087784962" Feb 23 14:18:08 crc kubenswrapper[4851]: I0223 14:18:08.372577 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:18:08 crc kubenswrapper[4851]: I0223 14:18:08.373213 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:18:08 crc kubenswrapper[4851]: I0223 14:18:08.422466 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:18:08 crc kubenswrapper[4851]: I0223 14:18:08.477433 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:18:08 crc kubenswrapper[4851]: I0223 14:18:08.660770 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ng22v"] Feb 23 14:18:10 crc kubenswrapper[4851]: I0223 14:18:10.444184 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ng22v" podUID="a8da8101-fc46-4a82-a83a-6701fc281e47" containerName="registry-server" containerID="cri-o://ea12ee0c4f8429720b44b043741b1353975cef8ca99eb55b8d2f1975c3666e9c" gracePeriod=2 Feb 23 14:18:11 crc kubenswrapper[4851]: I0223 14:18:11.456921 4851 generic.go:334] "Generic (PLEG): container finished" podID="a8da8101-fc46-4a82-a83a-6701fc281e47" containerID="ea12ee0c4f8429720b44b043741b1353975cef8ca99eb55b8d2f1975c3666e9c" exitCode=0 Feb 23 14:18:11 crc kubenswrapper[4851]: I0223 14:18:11.457148 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng22v" event={"ID":"a8da8101-fc46-4a82-a83a-6701fc281e47","Type":"ContainerDied","Data":"ea12ee0c4f8429720b44b043741b1353975cef8ca99eb55b8d2f1975c3666e9c"} Feb 23 14:18:12 crc kubenswrapper[4851]: I0223 14:18:12.467201 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-ng22v" event={"ID":"a8da8101-fc46-4a82-a83a-6701fc281e47","Type":"ContainerDied","Data":"182a5eb0b55bc4a3b055b615e7f12e3e104e1f282e8e4452d680b21ecdd2b227"} Feb 23 14:18:12 crc kubenswrapper[4851]: I0223 14:18:12.467484 4851 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="182a5eb0b55bc4a3b055b615e7f12e3e104e1f282e8e4452d680b21ecdd2b227" Feb 23 14:18:12 crc kubenswrapper[4851]: I0223 14:18:12.804871 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:18:12 crc kubenswrapper[4851]: I0223 14:18:12.915557 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-utilities\") pod \"a8da8101-fc46-4a82-a83a-6701fc281e47\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " Feb 23 14:18:12 crc kubenswrapper[4851]: I0223 14:18:12.915928 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-catalog-content\") pod \"a8da8101-fc46-4a82-a83a-6701fc281e47\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " Feb 23 14:18:12 crc kubenswrapper[4851]: I0223 14:18:12.916084 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bntbm\" (UniqueName: \"kubernetes.io/projected/a8da8101-fc46-4a82-a83a-6701fc281e47-kube-api-access-bntbm\") pod \"a8da8101-fc46-4a82-a83a-6701fc281e47\" (UID: \"a8da8101-fc46-4a82-a83a-6701fc281e47\") " Feb 23 14:18:12 crc kubenswrapper[4851]: I0223 14:18:12.917016 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-utilities" (OuterVolumeSpecName: "utilities") pod "a8da8101-fc46-4a82-a83a-6701fc281e47" (UID: 
"a8da8101-fc46-4a82-a83a-6701fc281e47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:18:12 crc kubenswrapper[4851]: I0223 14:18:12.921100 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8da8101-fc46-4a82-a83a-6701fc281e47-kube-api-access-bntbm" (OuterVolumeSpecName: "kube-api-access-bntbm") pod "a8da8101-fc46-4a82-a83a-6701fc281e47" (UID: "a8da8101-fc46-4a82-a83a-6701fc281e47"). InnerVolumeSpecName "kube-api-access-bntbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 14:18:13 crc kubenswrapper[4851]: I0223 14:18:13.018290 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bntbm\" (UniqueName: \"kubernetes.io/projected/a8da8101-fc46-4a82-a83a-6701fc281e47-kube-api-access-bntbm\") on node \"crc\" DevicePath \"\"" Feb 23 14:18:13 crc kubenswrapper[4851]: I0223 14:18:13.018322 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 14:18:13 crc kubenswrapper[4851]: I0223 14:18:13.039944 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8da8101-fc46-4a82-a83a-6701fc281e47" (UID: "a8da8101-fc46-4a82-a83a-6701fc281e47"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 14:18:13 crc kubenswrapper[4851]: I0223 14:18:13.119907 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8da8101-fc46-4a82-a83a-6701fc281e47-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 14:18:13 crc kubenswrapper[4851]: I0223 14:18:13.474784 4851 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ng22v" Feb 23 14:18:13 crc kubenswrapper[4851]: I0223 14:18:13.516413 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ng22v"] Feb 23 14:18:13 crc kubenswrapper[4851]: I0223 14:18:13.524266 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ng22v"] Feb 23 14:18:13 crc kubenswrapper[4851]: I0223 14:18:13.979996 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8da8101-fc46-4a82-a83a-6701fc281e47" path="/var/lib/kubelet/pods/a8da8101-fc46-4a82-a83a-6701fc281e47/volumes" Feb 23 14:18:14 crc kubenswrapper[4851]: I0223 14:18:14.969863 4851 scope.go:117] "RemoveContainer" containerID="90934b0740f1354c517db5de57c701c567aec000ad1db106aee01a2c6e2bae9b" Feb 23 14:18:15 crc kubenswrapper[4851]: I0223 14:18:15.495777 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-npswg" event={"ID":"c5a296ee-a904-4283-8849-65abb16717b4","Type":"ContainerStarted","Data":"ec33fac1dc1725ba8a540ad2691bc4d1c5fa6bb3355286605eaf590eb0dbf9a9"} Feb 23 14:18:20 crc kubenswrapper[4851]: I0223 14:18:20.943392 4851 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hqp8l"] Feb 23 14:18:20 crc kubenswrapper[4851]: E0223 14:18:20.944408 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8da8101-fc46-4a82-a83a-6701fc281e47" containerName="registry-server" Feb 23 14:18:20 crc kubenswrapper[4851]: I0223 14:18:20.944425 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8da8101-fc46-4a82-a83a-6701fc281e47" containerName="registry-server" Feb 23 14:18:20 crc kubenswrapper[4851]: E0223 14:18:20.944470 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8da8101-fc46-4a82-a83a-6701fc281e47" containerName="extract-utilities" Feb 23 14:18:20 crc kubenswrapper[4851]: 
I0223 14:18:20.944478 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8da8101-fc46-4a82-a83a-6701fc281e47" containerName="extract-utilities" Feb 23 14:18:20 crc kubenswrapper[4851]: E0223 14:18:20.944494 4851 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8da8101-fc46-4a82-a83a-6701fc281e47" containerName="extract-content" Feb 23 14:18:20 crc kubenswrapper[4851]: I0223 14:18:20.944502 4851 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8da8101-fc46-4a82-a83a-6701fc281e47" containerName="extract-content" Feb 23 14:18:20 crc kubenswrapper[4851]: I0223 14:18:20.944706 4851 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8da8101-fc46-4a82-a83a-6701fc281e47" containerName="registry-server" Feb 23 14:18:20 crc kubenswrapper[4851]: I0223 14:18:20.946457 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqp8l" Feb 23 14:18:20 crc kubenswrapper[4851]: I0223 14:18:20.964850 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqp8l"] Feb 23 14:18:21 crc kubenswrapper[4851]: I0223 14:18:21.082666 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-catalog-content\") pod \"redhat-marketplace-hqp8l\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") " pod="openshift-marketplace/redhat-marketplace-hqp8l" Feb 23 14:18:21 crc kubenswrapper[4851]: I0223 14:18:21.082754 4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-utilities\") pod \"redhat-marketplace-hqp8l\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") " pod="openshift-marketplace/redhat-marketplace-hqp8l" Feb 23 14:18:21 crc kubenswrapper[4851]: I0223 14:18:21.083047 
4851 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb72n\" (UniqueName: \"kubernetes.io/projected/f1931529-49b9-4943-8bfd-8179301e8f6e-kube-api-access-tb72n\") pod \"redhat-marketplace-hqp8l\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") " pod="openshift-marketplace/redhat-marketplace-hqp8l" Feb 23 14:18:21 crc kubenswrapper[4851]: I0223 14:18:21.184636 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-utilities\") pod \"redhat-marketplace-hqp8l\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") " pod="openshift-marketplace/redhat-marketplace-hqp8l" Feb 23 14:18:21 crc kubenswrapper[4851]: I0223 14:18:21.185042 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb72n\" (UniqueName: \"kubernetes.io/projected/f1931529-49b9-4943-8bfd-8179301e8f6e-kube-api-access-tb72n\") pod \"redhat-marketplace-hqp8l\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") " pod="openshift-marketplace/redhat-marketplace-hqp8l" Feb 23 14:18:21 crc kubenswrapper[4851]: I0223 14:18:21.185123 4851 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-catalog-content\") pod \"redhat-marketplace-hqp8l\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") " pod="openshift-marketplace/redhat-marketplace-hqp8l" Feb 23 14:18:21 crc kubenswrapper[4851]: I0223 14:18:21.185292 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-utilities\") pod \"redhat-marketplace-hqp8l\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") " pod="openshift-marketplace/redhat-marketplace-hqp8l" Feb 23 14:18:21 crc kubenswrapper[4851]: I0223 14:18:21.185504 4851 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-catalog-content\") pod \"redhat-marketplace-hqp8l\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") " pod="openshift-marketplace/redhat-marketplace-hqp8l" Feb 23 14:18:21 crc kubenswrapper[4851]: I0223 14:18:21.210267 4851 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb72n\" (UniqueName: \"kubernetes.io/projected/f1931529-49b9-4943-8bfd-8179301e8f6e-kube-api-access-tb72n\") pod \"redhat-marketplace-hqp8l\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") " pod="openshift-marketplace/redhat-marketplace-hqp8l" Feb 23 14:18:21 crc kubenswrapper[4851]: I0223 14:18:21.301297 4851 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqp8l" Feb 23 14:18:21 crc kubenswrapper[4851]: I0223 14:18:21.736396 4851 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqp8l"] Feb 23 14:18:22 crc kubenswrapper[4851]: I0223 14:18:22.552578 4851 generic.go:334] "Generic (PLEG): container finished" podID="f1931529-49b9-4943-8bfd-8179301e8f6e" containerID="159aa423acc2003ac4c5aca656e339f1d3a267276c625a54bc33d710f98e2603" exitCode=0 Feb 23 14:18:22 crc kubenswrapper[4851]: I0223 14:18:22.552649 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqp8l" event={"ID":"f1931529-49b9-4943-8bfd-8179301e8f6e","Type":"ContainerDied","Data":"159aa423acc2003ac4c5aca656e339f1d3a267276c625a54bc33d710f98e2603"} Feb 23 14:18:22 crc kubenswrapper[4851]: I0223 14:18:22.552845 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqp8l" event={"ID":"f1931529-49b9-4943-8bfd-8179301e8f6e","Type":"ContainerStarted","Data":"7f3214f06c54575f7cefb8cae563154bf447752acb217b673d0c784b0614ccf0"} Feb 23 14:18:23 
crc kubenswrapper[4851]: I0223 14:18:23.564528 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqp8l" event={"ID":"f1931529-49b9-4943-8bfd-8179301e8f6e","Type":"ContainerStarted","Data":"6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0"}
Feb 23 14:18:24 crc kubenswrapper[4851]: I0223 14:18:24.576373 4851 generic.go:334] "Generic (PLEG): container finished" podID="f1931529-49b9-4943-8bfd-8179301e8f6e" containerID="6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0" exitCode=0
Feb 23 14:18:24 crc kubenswrapper[4851]: I0223 14:18:24.576444 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqp8l" event={"ID":"f1931529-49b9-4943-8bfd-8179301e8f6e","Type":"ContainerDied","Data":"6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0"}
Feb 23 14:18:27 crc kubenswrapper[4851]: I0223 14:18:27.604381 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqp8l" event={"ID":"f1931529-49b9-4943-8bfd-8179301e8f6e","Type":"ContainerStarted","Data":"b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01"}
Feb 23 14:18:27 crc kubenswrapper[4851]: I0223 14:18:27.624011 4851 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hqp8l" podStartSLOduration=3.165041972 podStartE2EDuration="7.623994104s" podCreationTimestamp="2026-02-23 14:18:20 +0000 UTC" firstStartedPulling="2026-02-23 14:18:22.554025777 +0000 UTC m=+4257.235729455" lastFinishedPulling="2026-02-23 14:18:27.012977899 +0000 UTC m=+4261.694681587" observedRunningTime="2026-02-23 14:18:27.619064625 +0000 UTC m=+4262.300768313" watchObservedRunningTime="2026-02-23 14:18:27.623994104 +0000 UTC m=+4262.305697782"
Feb 23 14:18:31 crc kubenswrapper[4851]: I0223 14:18:31.302631 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hqp8l"
Feb 23 14:18:31 crc kubenswrapper[4851]: I0223 14:18:31.303124 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hqp8l"
Feb 23 14:18:31 crc kubenswrapper[4851]: I0223 14:18:31.366988 4851 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hqp8l"
Feb 23 14:18:41 crc kubenswrapper[4851]: I0223 14:18:41.345611 4851 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hqp8l"
Feb 23 14:18:41 crc kubenswrapper[4851]: I0223 14:18:41.391980 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqp8l"]
Feb 23 14:18:42 crc kubenswrapper[4851]: I0223 14:18:42.272798 4851 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hqp8l" podUID="f1931529-49b9-4943-8bfd-8179301e8f6e" containerName="registry-server" containerID="cri-o://b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01" gracePeriod=2
Feb 23 14:18:42 crc kubenswrapper[4851]: I0223 14:18:42.717221 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqp8l"
Feb 23 14:18:42 crc kubenswrapper[4851]: I0223 14:18:42.887007 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-catalog-content\") pod \"f1931529-49b9-4943-8bfd-8179301e8f6e\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") "
Feb 23 14:18:42 crc kubenswrapper[4851]: I0223 14:18:42.887146 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-utilities\") pod \"f1931529-49b9-4943-8bfd-8179301e8f6e\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") "
Feb 23 14:18:42 crc kubenswrapper[4851]: I0223 14:18:42.887204 4851 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb72n\" (UniqueName: \"kubernetes.io/projected/f1931529-49b9-4943-8bfd-8179301e8f6e-kube-api-access-tb72n\") pod \"f1931529-49b9-4943-8bfd-8179301e8f6e\" (UID: \"f1931529-49b9-4943-8bfd-8179301e8f6e\") "
Feb 23 14:18:42 crc kubenswrapper[4851]: I0223 14:18:42.888815 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-utilities" (OuterVolumeSpecName: "utilities") pod "f1931529-49b9-4943-8bfd-8179301e8f6e" (UID: "f1931529-49b9-4943-8bfd-8179301e8f6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 14:18:42 crc kubenswrapper[4851]: I0223 14:18:42.892458 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1931529-49b9-4943-8bfd-8179301e8f6e-kube-api-access-tb72n" (OuterVolumeSpecName: "kube-api-access-tb72n") pod "f1931529-49b9-4943-8bfd-8179301e8f6e" (UID: "f1931529-49b9-4943-8bfd-8179301e8f6e"). InnerVolumeSpecName "kube-api-access-tb72n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 14:18:42 crc kubenswrapper[4851]: I0223 14:18:42.911710 4851 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1931529-49b9-4943-8bfd-8179301e8f6e" (UID: "f1931529-49b9-4943-8bfd-8179301e8f6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 14:18:42 crc kubenswrapper[4851]: I0223 14:18:42.989900 4851 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 14:18:42 crc kubenswrapper[4851]: I0223 14:18:42.989933 4851 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1931529-49b9-4943-8bfd-8179301e8f6e-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 14:18:42 crc kubenswrapper[4851]: I0223 14:18:42.989943 4851 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb72n\" (UniqueName: \"kubernetes.io/projected/f1931529-49b9-4943-8bfd-8179301e8f6e-kube-api-access-tb72n\") on node \"crc\" DevicePath \"\""
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.285286 4851 generic.go:334] "Generic (PLEG): container finished" podID="f1931529-49b9-4943-8bfd-8179301e8f6e" containerID="b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01" exitCode=0
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.285358 4851 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hqp8l"
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.285377 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqp8l" event={"ID":"f1931529-49b9-4943-8bfd-8179301e8f6e","Type":"ContainerDied","Data":"b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01"}
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.285463 4851 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hqp8l" event={"ID":"f1931529-49b9-4943-8bfd-8179301e8f6e","Type":"ContainerDied","Data":"7f3214f06c54575f7cefb8cae563154bf447752acb217b673d0c784b0614ccf0"}
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.285498 4851 scope.go:117] "RemoveContainer" containerID="b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01"
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.318472 4851 scope.go:117] "RemoveContainer" containerID="6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0"
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.326577 4851 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqp8l"]
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.341162 4851 scope.go:117] "RemoveContainer" containerID="159aa423acc2003ac4c5aca656e339f1d3a267276c625a54bc33d710f98e2603"
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.341385 4851 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hqp8l"]
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.378234 4851 scope.go:117] "RemoveContainer" containerID="b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01"
Feb 23 14:18:43 crc kubenswrapper[4851]: E0223 14:18:43.378770 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01\": container with ID starting with b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01 not found: ID does not exist" containerID="b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01"
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.378828 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01"} err="failed to get container status \"b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01\": rpc error: code = NotFound desc = could not find container \"b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01\": container with ID starting with b6247a44251dff46fddcdb658da675b776e1e5c4e2526eae7f0d7bc53b074a01 not found: ID does not exist"
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.378855 4851 scope.go:117] "RemoveContainer" containerID="6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0"
Feb 23 14:18:43 crc kubenswrapper[4851]: E0223 14:18:43.379247 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0\": container with ID starting with 6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0 not found: ID does not exist" containerID="6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0"
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.379285 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0"} err="failed to get container status \"6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0\": rpc error: code = NotFound desc = could not find container \"6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0\": container with ID starting with 6c73fdb2c99233a1e9195be6371de5a0071ddab02d4ffb1d9b5038e6d5402ca0 not found: ID does not exist"
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.379310 4851 scope.go:117] "RemoveContainer" containerID="159aa423acc2003ac4c5aca656e339f1d3a267276c625a54bc33d710f98e2603"
Feb 23 14:18:43 crc kubenswrapper[4851]: E0223 14:18:43.379594 4851 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159aa423acc2003ac4c5aca656e339f1d3a267276c625a54bc33d710f98e2603\": container with ID starting with 159aa423acc2003ac4c5aca656e339f1d3a267276c625a54bc33d710f98e2603 not found: ID does not exist" containerID="159aa423acc2003ac4c5aca656e339f1d3a267276c625a54bc33d710f98e2603"
Feb 23 14:18:43 crc kubenswrapper[4851]: I0223 14:18:43.379618 4851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159aa423acc2003ac4c5aca656e339f1d3a267276c625a54bc33d710f98e2603"} err="failed to get container status \"159aa423acc2003ac4c5aca656e339f1d3a267276c625a54bc33d710f98e2603\": rpc error: code = NotFound desc = could not find container \"159aa423acc2003ac4c5aca656e339f1d3a267276c625a54bc33d710f98e2603\": container with ID starting with 159aa423acc2003ac4c5aca656e339f1d3a267276c625a54bc33d710f98e2603 not found: ID does not exist"
Feb 23 14:18:44 crc kubenswrapper[4851]: I0223 14:18:44.006168 4851 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1931529-49b9-4943-8bfd-8179301e8f6e" path="/var/lib/kubelet/pods/f1931529-49b9-4943-8bfd-8179301e8f6e/volumes"
Feb 23 14:20:41 crc kubenswrapper[4851]: I0223 14:20:41.925213 4851 patch_prober.go:28] interesting pod/machine-config-daemon-npswg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 14:20:41 crc kubenswrapper[4851]: I0223 14:20:41.925841 4851 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-npswg" podUID="c5a296ee-a904-4283-8849-65abb16717b4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"